Dec 13 22:16:40 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 13 22:16:41 crc restorecon[4657]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 
22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc 
restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 22:16:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 13 22:16:41 crc restorecon[4657]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 13 22:16:41 crc restorecon[4657]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 13 22:16:42 crc kubenswrapper[4866]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 22:16:42 crc kubenswrapper[4866]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 13 22:16:42 crc kubenswrapper[4866]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 22:16:42 crc kubenswrapper[4866]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
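The long run of "not reset as customized by admin" entries from restorecon[4657] above is expected rather than an error: container_file_t, including its MCS-labelled variants such as s0:c7,c13, is listed among SELinux's customizable types, and restorecon leaves any file whose current type is customizable alone unless explicitly forced. That is why this pass only rewrote contexts that had drifted to non-customizable types (config.json from unlabeled_t, and the kubenswrapper binary from bin_t to kubelet_exec_t). A minimal sketch for checking this on a RHEL/RHCOS host follows; the paths are the stock ones, and the forced relabel is deliberately left commented out, since stripping the per-pod category pairs would break running containers:

    # Types restorecon treats as admin-customized and skips by default
    cat /etc/selinux/targeted/contexts/customizable_types

    # What the policy would assign to one of the skipped files
    matchpathcon /var/lib/kubelet/device-plugins/kubelet.sock

    # -F would force customizable types back to the policy default.
    # Do NOT run this against a live kubelet directory: it drops the
    # c7,c13-style MCS categories the container runtime relies on.
    # restorecon -RFv /var/lib/kubelet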
Dec 13 22:16:42 crc kubenswrapper[4866]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 22:16:42 crc kubenswrapper[4866]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.056815 4866 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.059848 4866 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.059865 4866 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.059871 4866 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.059877 4866 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.059881 4866 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.059885 4866 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.059889 4866 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.059894 4866 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.059898 4866 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.059902 4866 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.059906 4866 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.059910 4866 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.059915 4866 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.059927 4866 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.059930 4866 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.059934 4866 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.059938 4866 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.059941 4866 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.059945 4866 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.059949 4866 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.059952 4866 feature_gate.go:330] unrecognized 
feature gate: PlatformOperators Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.059956 4866 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.059959 4866 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.059963 4866 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.059966 4866 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.059970 4866 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.059973 4866 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.059976 4866 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.059979 4866 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.059983 4866 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.059986 4866 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.059990 4866 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.059993 4866 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.059998 4866 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
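[Annotation] The long runs of "unrecognized feature gate" warnings above and below are cluster-level OpenShift gate names being passed through to the kubelet, which only warns about and then ignores names it does not know; the same set is re-parsed several times during startup, so the blocks repeat. A quick way to reduce a journal dump to the distinct gate names, with a count per name so the repeated parse passes are visible (a sketch that reads the journal text from stdin):

import re
import sys

# Collect distinct gate names from "unrecognized feature gate: X" warnings
# in journal text piped on stdin; the per-name count shows how many times
# the same gate set was re-parsed during this boot.
pattern = re.compile(r"unrecognized feature gate: (\S+)")
counts = {}
for line in sys.stdin:
    for name in pattern.findall(line):
        counts[name] = counts.get(name, 0) + 1

for name in sorted(counts):
    print(f"{counts[name]:3d}  {name}")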
Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.060003 4866 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.060007 4866 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.060011 4866 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.060015 4866 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.060018 4866 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.060022 4866 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.060026 4866 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.060029 4866 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.060033 4866 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.060036 4866 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.060040 4866 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.060043 4866 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.060061 4866 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.060065 4866 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.060068 4866 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.060072 4866 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.060075 4866 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.060079 4866 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.060082 4866 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.060088 4866 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.060092 4866 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.060096 4866 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.060100 4866 feature_gate.go:330] unrecognized feature gate: Example Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.060104 4866 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.060107 4866 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.060110 4866 feature_gate.go:330] unrecognized feature gate: 
VSphereStaticIPs Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.060115 4866 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.060120 4866 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.060124 4866 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.060128 4866 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.060132 4866 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.060136 4866 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.060141 4866 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.060145 4866 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.060149 4866 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.060153 4866 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.060159 4866 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060230 4866 flags.go:64] FLAG: --address="0.0.0.0" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060238 4866 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060244 4866 flags.go:64] FLAG: --anonymous-auth="true" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060249 4866 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060254 4866 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060258 4866 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060264 4866 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060269 4866 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060274 4866 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060278 4866 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060282 4866 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060286 4866 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060291 4866 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060295 4866 flags.go:64] FLAG: --cgroup-root="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060299 4866 flags.go:64] FLAG: 
--cgroups-per-qos="true" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060303 4866 flags.go:64] FLAG: --client-ca-file="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060307 4866 flags.go:64] FLAG: --cloud-config="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060311 4866 flags.go:64] FLAG: --cloud-provider="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060314 4866 flags.go:64] FLAG: --cluster-dns="[]" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060319 4866 flags.go:64] FLAG: --cluster-domain="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060322 4866 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060327 4866 flags.go:64] FLAG: --config-dir="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060330 4866 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060335 4866 flags.go:64] FLAG: --container-log-max-files="5" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060340 4866 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060344 4866 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060348 4866 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060352 4866 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060356 4866 flags.go:64] FLAG: --contention-profiling="false" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060361 4866 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060366 4866 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060371 4866 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060376 4866 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060384 4866 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060388 4866 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060394 4866 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060398 4866 flags.go:64] FLAG: --enable-load-reader="false" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060403 4866 flags.go:64] FLAG: --enable-server="true" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060408 4866 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060414 4866 flags.go:64] FLAG: --event-burst="100" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060419 4866 flags.go:64] FLAG: --event-qps="50" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060423 4866 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060427 4866 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060431 4866 flags.go:64] FLAG: --eviction-hard="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060436 4866 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" 
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060442 4866 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060446 4866 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060450 4866 flags.go:64] FLAG: --eviction-soft="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060454 4866 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060458 4866 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060462 4866 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060466 4866 flags.go:64] FLAG: --experimental-mounter-path="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060471 4866 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060475 4866 flags.go:64] FLAG: --fail-swap-on="true" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060479 4866 flags.go:64] FLAG: --feature-gates="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060484 4866 flags.go:64] FLAG: --file-check-frequency="20s" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060488 4866 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060492 4866 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060496 4866 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060500 4866 flags.go:64] FLAG: --healthz-port="10248" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060504 4866 flags.go:64] FLAG: --help="false" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060508 4866 flags.go:64] FLAG: --hostname-override="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060512 4866 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060516 4866 flags.go:64] FLAG: --http-check-frequency="20s" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060520 4866 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060523 4866 flags.go:64] FLAG: --image-credential-provider-config="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060527 4866 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060534 4866 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060538 4866 flags.go:64] FLAG: --image-service-endpoint="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060543 4866 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060547 4866 flags.go:64] FLAG: --kube-api-burst="100" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060551 4866 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060555 4866 flags.go:64] FLAG: --kube-api-qps="50" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060559 4866 flags.go:64] FLAG: --kube-reserved="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060563 4866 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 13 22:16:42 crc 
kubenswrapper[4866]: I1213 22:16:42.060567 4866 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060571 4866 flags.go:64] FLAG: --kubelet-cgroups="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060575 4866 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060579 4866 flags.go:64] FLAG: --lock-file="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060583 4866 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060587 4866 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060591 4866 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060597 4866 flags.go:64] FLAG: --log-json-split-stream="false" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060601 4866 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060605 4866 flags.go:64] FLAG: --log-text-split-stream="false" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060609 4866 flags.go:64] FLAG: --logging-format="text" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060613 4866 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060617 4866 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060621 4866 flags.go:64] FLAG: --manifest-url="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060625 4866 flags.go:64] FLAG: --manifest-url-header="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060631 4866 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060635 4866 flags.go:64] FLAG: --max-open-files="1000000" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060639 4866 flags.go:64] FLAG: --max-pods="110" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060643 4866 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060648 4866 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060652 4866 flags.go:64] FLAG: --memory-manager-policy="None" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060656 4866 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060659 4866 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060663 4866 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060669 4866 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060704 4866 flags.go:64] FLAG: --node-status-max-images="50" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060709 4866 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060713 4866 flags.go:64] FLAG: --oom-score-adj="-999" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060717 4866 flags.go:64] FLAG: --pod-cidr="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060721 4866 
flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060728 4866 flags.go:64] FLAG: --pod-manifest-path="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060732 4866 flags.go:64] FLAG: --pod-max-pids="-1" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060737 4866 flags.go:64] FLAG: --pods-per-core="0" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060741 4866 flags.go:64] FLAG: --port="10250" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060746 4866 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060750 4866 flags.go:64] FLAG: --provider-id="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060754 4866 flags.go:64] FLAG: --qos-reserved="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060757 4866 flags.go:64] FLAG: --read-only-port="10255" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060761 4866 flags.go:64] FLAG: --register-node="true" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060765 4866 flags.go:64] FLAG: --register-schedulable="true" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060769 4866 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060776 4866 flags.go:64] FLAG: --registry-burst="10" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060780 4866 flags.go:64] FLAG: --registry-qps="5" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060784 4866 flags.go:64] FLAG: --reserved-cpus="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060787 4866 flags.go:64] FLAG: --reserved-memory="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060793 4866 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060797 4866 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060801 4866 flags.go:64] FLAG: --rotate-certificates="false" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060805 4866 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060809 4866 flags.go:64] FLAG: --runonce="false" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060813 4866 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060817 4866 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060821 4866 flags.go:64] FLAG: --seccomp-default="false" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060825 4866 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060829 4866 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060833 4866 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060839 4866 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060844 4866 flags.go:64] FLAG: --storage-driver-password="root" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060848 4866 flags.go:64] FLAG: --storage-driver-secure="false" Dec 13 22:16:42 crc 
kubenswrapper[4866]: I1213 22:16:42.060852 4866 flags.go:64] FLAG: --storage-driver-table="stats" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060856 4866 flags.go:64] FLAG: --storage-driver-user="root" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060860 4866 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060864 4866 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060869 4866 flags.go:64] FLAG: --system-cgroups="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060872 4866 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060879 4866 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060883 4866 flags.go:64] FLAG: --tls-cert-file="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060887 4866 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060891 4866 flags.go:64] FLAG: --tls-min-version="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060895 4866 flags.go:64] FLAG: --tls-private-key-file="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060899 4866 flags.go:64] FLAG: --topology-manager-policy="none" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060903 4866 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060907 4866 flags.go:64] FLAG: --topology-manager-scope="container" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060911 4866 flags.go:64] FLAG: --v="2" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060917 4866 flags.go:64] FLAG: --version="false" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060922 4866 flags.go:64] FLAG: --vmodule="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060927 4866 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.060931 4866 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061047 4866 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061068 4866 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061072 4866 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061076 4866 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061079 4866 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061082 4866 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061086 4866 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061090 4866 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061093 4866 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061097 4866 
feature_gate.go:330] unrecognized feature gate: NewOLM Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061102 4866 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061106 4866 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061109 4866 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061112 4866 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061116 4866 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061120 4866 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061125 4866 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061129 4866 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061133 4866 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061137 4866 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061141 4866 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061144 4866 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061148 4866 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061151 4866 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061157 4866 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061160 4866 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061164 4866 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061169 4866 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
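[Annotation] The flags.go:64 entries interleaved above record every command-line flag's effective value in Go's quoted form, one FLAG per entry; parsed together they give the kubelet's complete command-line view (for example, --config="/etc/kubernetes/kubelet.conf" and --node-ip="192.168.126.11"). A small sketch that turns those entries into a dictionary, again reading journal text from stdin:

import re
import sys

# Parse 'flags.go:64] FLAG: --name="value"' entries into a dict.
# Values are logged in Go's %q form, so the regex strips one quote layer.
flag_re = re.compile(r'FLAG: (--[\w-]+)="(.*?)"')
flags = {}
for line in sys.stdin:
    for name, value in flag_re.findall(line):
        flags[name] = value

# e.g. flags["--config"] == "/etc/kubernetes/kubelet.conf"
for name, value in sorted(flags.items()):
    print(f"{name} = {value!r}")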
Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061173 4866 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061177 4866 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061181 4866 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061184 4866 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061188 4866 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061191 4866 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061195 4866 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061199 4866 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061203 4866 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061207 4866 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061212 4866 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061216 4866 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061219 4866 feature_gate.go:330] unrecognized feature gate: Example Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061223 4866 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061228 4866 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061231 4866 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061235 4866 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061238 4866 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061241 4866 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061245 4866 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061248 4866 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061251 4866 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061255 4866 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061258 4866 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061262 4866 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 13 22:16:42 crc kubenswrapper[4866]: 
W1213 22:16:42.061265 4866 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061268 4866 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061272 4866 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061275 4866 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061279 4866 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061282 4866 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061286 4866 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061291 4866 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061296 4866 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061299 4866 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061303 4866 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061306 4866 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061310 4866 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061313 4866 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061317 4866 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061321 4866 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061324 4866 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.061328 4866 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.061340 4866 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.069700 4866 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.069725 4866 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.069861 4866 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
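[Annotation] Each parse pass ends with an I-level "feature gates:" summary like the one just above, printed in Go's map[key:value] syntax; the map is identical on every pass in this boot (CloudDualStackNodeIPs, DisableKubeletCloudCredentialProviders, KMSv1, and ValidatingAdmissionPolicy forced on, the rest off). A sketch of pulling that Go map literal into a Python dict, using a trimmed copy of the logged line:

import re

# Trimmed copy of the "feature gates:" summary logged above; the full line
# carries more entries in the same Key:bool form.
line = ("feature gates: {map[CloudDualStackNodeIPs:true "
        "DisableKubeletCloudCredentialProviders:true KMSv1:true "
        "ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}")

# Go prints map[string]bool as 'map[Key:true Key:false ...]'; extract the
# key:value pairs and build a real dict.
pairs = re.findall(r"(\w+):(true|false)", line)
gates = {name: value == "true" for name, value in pairs}

assert gates["KMSv1"] is True
assert gates["VolumeAttributesClass"] is False
print(sorted(name for name, on in gates.items() if on))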
Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.069869 4866 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.069874 4866 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.069882 4866 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.069886 4866 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.069890 4866 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.069894 4866 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.069897 4866 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.069901 4866 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.069905 4866 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.069909 4866 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.069912 4866 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.069916 4866 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.069919 4866 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.069924 4866 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.069927 4866 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.069933 4866 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.069937 4866 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.069941 4866 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.069945 4866 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.069948 4866 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.069952 4866 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.069955 4866 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.069959 4866 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.069962 4866 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.069966 4866 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.069969 4866 feature_gate.go:330] unrecognized feature 
gate: GCPClusterHostedDNS Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.069974 4866 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.069981 4866 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.069987 4866 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.069993 4866 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.069998 4866 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070001 4866 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070005 4866 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070011 4866 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070015 4866 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070019 4866 feature_gate.go:330] unrecognized feature gate: Example Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070022 4866 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070026 4866 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070030 4866 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070034 4866 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070041 4866 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070062 4866 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070067 4866 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070072 4866 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070077 4866 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070081 4866 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070086 4866 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070090 4866 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070094 4866 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070098 4866 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070103 4866 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 13 22:16:42 crc 
kubenswrapper[4866]: W1213 22:16:42.070107 4866 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070114 4866 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070117 4866 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070121 4866 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070125 4866 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070128 4866 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070132 4866 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070138 4866 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070142 4866 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070147 4866 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070152 4866 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070156 4866 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070161 4866 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070167 4866 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070171 4866 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070175 4866 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070178 4866 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070182 4866 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070187 4866 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.070195 4866 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070442 4866 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070451 4866 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070455 4866 
feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070459 4866 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070463 4866 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070466 4866 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070470 4866 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070475 4866 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070478 4866 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070486 4866 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070490 4866 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070494 4866 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070497 4866 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070502 4866 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070507 4866 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070512 4866 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070515 4866 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070519 4866 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070523 4866 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070527 4866 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070531 4866 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070537 4866 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070541 4866 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070544 4866 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070549 4866 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070553 4866 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070557 4866 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070561 4866 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070565 4866 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070570 4866 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070574 4866 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070578 4866 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070582 4866 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070590 4866 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070596 4866 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070609 4866 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070613 4866 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070617 4866 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070621 4866 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070624 4866 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070628 4866 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070632 4866 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070636 4866 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070640 4866 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070644 4866 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070648 4866 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070662 4866 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070666 4866 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070669 4866 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070674 4866 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070678 4866 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070682 4866 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070687 4866 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070733 4866 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.070899 4866 feature_gate.go:330] unrecognized feature gate: 
OnClusterBuild Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.071232 4866 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.071249 4866 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.071255 4866 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.071263 4866 feature_gate.go:330] unrecognized feature gate: Example Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.071268 4866 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.071274 4866 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.071280 4866 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.071285 4866 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.071295 4866 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.071588 4866 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.071595 4866 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.071601 4866 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.071606 4866 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.071611 4866 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.071619 4866 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.071625 4866 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.071635 4866 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.071993 4866 server.go:940] "Client rotation is on, will bootstrap in background" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.075417 4866 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.075714 4866 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.076407 4866 server.go:997] "Starting client certificate rotation"
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.076446 4866 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.076684 4866 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-08 19:06:56.684785882 +0000 UTC
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.076902 4866 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.082763 4866 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 13 22:16:42 crc kubenswrapper[4866]: E1213 22:16:42.084161 4866 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError"
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.086082 4866 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.095888 4866 log.go:25] "Validated CRI v1 runtime API"
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.116628 4866 log.go:25] "Validated CRI v1 image API"
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.118232 4866 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.120141 4866 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-13-22-11-29-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.120245 4866 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.131957 4866 manager.go:217] Machine: {Timestamp:2025-12-13 22:16:42.129489606 +0000 UTC m=+0.170828178 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2800000 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:36cffe06-8718-49d3-bf76-a8b562df5fba BootID:cf8239d0-c705-4abb-ba7d-2dc5aaf26a3e Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:5f:2d:64 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:5f:2d:64 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:1f:b2:06 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:30:ff:3d Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:59:52:3c Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:79:ee:ea Speed:-1 Mtu:1496} {Name:eth10 MacAddress:66:12:b9:7f:41:e5 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:be:a9:9c:6c:e9:2b Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.132982 4866 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
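[editor's note] The certificate_manager entries above show why the kubelet tries to rotate immediately: the client certificate does not expire until 2026-02-24, but its jittered rotation deadline (2025-12-08) was already in the past at boot, so a CSR is posted at once and fails with connection refused. Reading the same expiry by hand is a small stdlib exercise; the PEM path is taken from the log line, everything else is generic Go:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Path from the certificate_store.go log line above.
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		panic(err)
	}
	// The file holds cert + key; take the first CERTIFICATE block.
	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Printf("expires %s (in %s)\n",
			cert.NotAfter, time.Until(cert.NotAfter).Round(time.Hour))
		return
	}
}

The rotation deadline itself is chosen by the kubelet's certificate manager at a jittered fraction of the certificate's lifetime; that policy is not reproduced here.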
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.133287 4866 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.133562 4866 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.133761 4866 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.133798 4866 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.134013 4866 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.134023 4866 container_manager_linux.go:303] "Creating device plugin manager"
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.134191 4866 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.134214 4866 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.134653 4866 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.134735 4866 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.135886 4866 kubelet.go:418] "Attempting to sync node with API server"
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.135909 4866 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
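[editor's note] The nodeConfig entry above carries the inputs of the kubelet's node-allocatable formula: allocatable = capacity - kube-reserved - system-reserved - hard-eviction threshold. With MemoryCapacity from the Machine entry and the reservations shown here, the arithmetic works out as below; this is plain illustration of the documented formula, not a kubelet API:

package main

import "fmt"

// Allocatable memory per the kubelet node-allocatable formula:
//   allocatable = capacity - kubeReserved - systemReserved - hardEviction
// Values are taken from the Machine and nodeConfig log entries above.
func main() {
	const mi = 1024 * 1024
	capacity := int64(25199480832)    // MemoryCapacity from the Machine line
	systemReserved := int64(350 * mi) // SystemReserved memory: "350Mi"
	kubeReserved := int64(0)          // KubeReserved is null in nodeConfig
	eviction := int64(100 * mi)       // memory.available hard threshold: "100Mi"

	allocatable := capacity - systemReserved - kubeReserved - eviction
	fmt.Printf("allocatable memory: %d bytes (~%.1f GiB)\n",
		allocatable, float64(allocatable)/float64(1024*mi))
}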
path" path="/etc/kubernetes/manifests" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.135949 4866 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.135965 4866 kubelet.go:324] "Adding apiserver pod source" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.135979 4866 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.137913 4866 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.138097 4866 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Dec 13 22:16:42 crc kubenswrapper[4866]: E1213 22:16:42.138236 4866 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.138211 4866 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.138297 4866 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Dec 13 22:16:42 crc kubenswrapper[4866]: E1213 22:16:42.138324 4866 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError"
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.138923 4866 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.139402 4866 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.139431 4866 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.139440 4866 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.139447 4866 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.139462 4866 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.139472 4866 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.139482 4866 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.139497 4866 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.139510 4866 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.139533 4866 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.139545 4866 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.139552 4866 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.139706 4866 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.140106 4866 server.go:1280] "Started kubelet"
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.140295 4866 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.141195 4866 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.141576 4866 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.142186 4866 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused
Dec 13 22:16:42 crc systemd[1]: Started Kubernetes Kubelet.
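[editor's note] Every "connection refused" against api-int.crc.testing:6443 in this stretch is the same condition: the kubelet starts before the static-pod kube-apiserver it is about to launch from /etc/kubernetes/manifests, so the CSR post, the client-go reflectors, and the CSINode check all fail and retry until the socket opens. A throwaway Go probe for that endpoint; the host/port come from the log, the backoff policy here is only a sketch:

package main

import (
	"fmt"
	"net"
	"time"
)

// Poll the apiserver endpoint until it accepts TCP connections,
// doubling the wait between attempts. Diagnostic sketch only; the
// kubelet's own retries are driven by client-go and its controllers.
func main() {
	addr := "api-int.crc.testing:6443" // endpoint from the log lines
	backoff := 200 * time.Millisecond  // starting interval, as in the lease line below
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("apiserver is accepting connections")
			return
		}
		fmt.Printf("still down: %v (retrying in %s)\n", err, backoff)
		time.Sleep(backoff)
		if backoff < 10*time.Second {
			backoff *= 2
		}
	}
}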
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.142554 4866 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.142777 4866 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.142807 4866 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 00:28:52.82205424 +0000 UTC
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.142841 4866 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 626h12m10.679215044s for next certificate rotation
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.142863 4866 volume_manager.go:287] "The desired_state_of_world populator starts"
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.142869 4866 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 13 22:16:42 crc kubenswrapper[4866]: E1213 22:16:42.143129 4866 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.143282 4866 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.143714 4866 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused
Dec 13 22:16:42 crc kubenswrapper[4866]: E1213 22:16:42.143788 4866 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError"
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.144195 4866 factory.go:55] Registering systemd factory
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.144223 4866 factory.go:221] Registration of the systemd container factory successfully
Dec 13 22:16:42 crc kubenswrapper[4866]: E1213 22:16:42.145286 4866 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.69:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1880e63ca92ea7f4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-13 22:16:42.140067828 +0000 UTC m=+0.181406380,LastTimestamp:2025-12-13 22:16:42.140067828 +0000 UTC m=+0.181406380,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 13 22:16:42 crc kubenswrapper[4866]: E1213 22:16:42.146207 4866 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="200ms"
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.147183 4866 factory.go:153] Registering CRI-O factory
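[editor's note] The lease failure above is the node heartbeat: the kubelet keeps a coordination.k8s.io Lease named after the node in the kube-node-lease namespace and retries at the logged 200ms interval until the API answers. Once the cluster is up, the same object can be inspected with client-go; the kubeconfig path below is an assumption for illustration, the namespace and name come from the failing URL in the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; any admin kubeconfig works.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// .../namespaces/kube-node-lease/leases/crc, per the log line above.
	lease, err := client.CoordinationV1().Leases("kube-node-lease").
		Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if lease.Spec.HolderIdentity != nil {
		fmt.Printf("holder=%s renewed=%v\n", *lease.Spec.HolderIdentity, lease.Spec.RenewTime)
	}
}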
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.147309 4866 factory.go:221] Registration of the crio container factory successfully
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.147441 4866 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.147533 4866 factory.go:103] Registering Raw factory
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.147631 4866 manager.go:1196] Started watching for new ooms in manager
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.148679 4866 manager.go:319] Starting recovery of all containers
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.151238 4866 server.go:460] "Adding debug handlers to kubelet server"
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.156080 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.156161 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.156191 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.156224 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.156235 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.156256 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.156264 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.156318 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
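[editor's note] The long run of reconstruct.go:130 entries starting here (and continuing below) is volume-manager state reconstruction after the kubelet restart: the kubelet rescans /var/lib/kubelet/pods/<pod-uid>/volumes/<plugin>/<volume-name> on disk and marks every mount it finds "uncertain" in the actual state of the world until it can be reconciled against the API. A sketch of that directory walk, assuming the standard kubelet pod-directory layout visible in these volumeName values; this is not the kubelet's own code:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Enumerate volumes the way reconstruction discovers them: one
// directory per pod UID, then per volume plugin, then per volume name.
func main() {
	root := "/var/lib/kubelet/pods"
	pods, err := os.ReadDir(root)
	if err != nil {
		panic(err)
	}
	for _, pod := range pods {
		volRoot := filepath.Join(root, pod.Name(), "volumes")
		plugins, err := os.ReadDir(volRoot)
		if err != nil {
			continue // pod has no volumes directory
		}
		for _, plugin := range plugins {
			vols, _ := os.ReadDir(filepath.Join(volRoot, plugin.Name()))
			for _, v := range vols {
				// e.g. kubernetes.io~projected / kube-api-access-lzf88
				fmt.Printf("pod %s: %s/%s\n", pod.Name(), plugin.Name(), v.Name())
			}
		}
	}
}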
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.156330 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.156377 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.156414 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.156427 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.156573 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.156599 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.156618 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.156629 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.156717 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.156728 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.156750 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.156758 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.156768 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.156776 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.156794 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.156864 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.156873 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.156881 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.156897 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.156979 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.156987 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.156997 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.157006 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.157016 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.157144 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.157154 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.157166 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.157174 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.157185 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.157319 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.157331 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.157343 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.157352 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.157363 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.157384 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.157469 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.157481 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.157491 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.157502 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.157511 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.157589 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.157601 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.157611 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.157623 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.157746 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.157764 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.157792 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.157924 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.157937 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.157946 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.157958 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.157967 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.157975 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.158087 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.158097 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.158110 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.158122 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.158134 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.158255 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.158266 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.158278 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.158289 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.158299 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.158312 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.158320 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.158334 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.158344 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.158353 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.158392 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.158402 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.158423 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.158433 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.158443 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.158455 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.158465 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.158478 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.158608 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.158619 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.158632 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.158641 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.158653 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.158662 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.158825 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.158838 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.158847 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.158858 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.158869 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.158976 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.158990 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.158999 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.159010 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.159021 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.159133 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.159154 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.159167 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.159184 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.159376 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.159393 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.159407 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.159418 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.159431 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.159512 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.159525 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.159538 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.159549 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.159585 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.159637 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.159647 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.159658 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.159671 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.159681 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.159720 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.159730 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.159739 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.159751 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.159761 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.159773 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.159852 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.159862 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.159874 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.173698 4866 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175323 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175343 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175355 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175372 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175382 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175393 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175403 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175415 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175429 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175440 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175450 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175461 4866
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175473 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175482 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175528 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175538 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175548 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175560 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175572 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175582 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175592 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175604 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175616 
4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175628 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175637 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175646 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175657 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175666 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175677 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175687 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175700 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175711 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175722 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175733 
4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175745 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175758 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175772 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175783 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175855 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175869 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175880 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175891 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175904 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175939 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175950 4866 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175961 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175971 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175981 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.175992 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.176004 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.176017 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.176027 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.176037 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.176061 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.176072 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.176080 4866 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.176092 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.176125 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.176135 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.176146 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.176183 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.176194 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.176205 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.176219 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.176229 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.176238 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.176250 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.176270 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.176280 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.176291 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.176302 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.176313 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.176349 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.176365 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.176376 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.176388 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.176399 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.176412 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.176423 4866 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.176434 4866 reconstruct.go:97] "Volume reconstruction finished" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.176444 4866 reconciler.go:26] "Reconciler: start to sync state" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.184312 4866 manager.go:324] Recovery completed Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.192754 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.194328 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.194364 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.194374 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.195070 4866 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.195092 4866 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.195113 4866 state_mem.go:36] "Initialized new in-memory state store" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.208061 4866 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.211875 4866 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.211939 4866 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.211973 4866 kubelet.go:2335] "Starting kubelet main sync loop" Dec 13 22:16:42 crc kubenswrapper[4866]: E1213 22:16:42.212019 4866 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 22:16:42 crc kubenswrapper[4866]: W1213 22:16:42.213024 4866 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Dec 13 22:16:42 crc kubenswrapper[4866]: E1213 22:16:42.213196 4866 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Dec 13 22:16:42 crc kubenswrapper[4866]: E1213 22:16:42.243514 4866 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 22:16:42 crc kubenswrapper[4866]: E1213 22:16:42.312717 4866 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 22:16:42 crc kubenswrapper[4866]: E1213 22:16:42.344020 4866 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 22:16:42 crc kubenswrapper[4866]: E1213 22:16:42.348122 4866 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="400ms" Dec 13 22:16:42 crc kubenswrapper[4866]: E1213 22:16:42.444793 4866 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.501566 4866 policy_none.go:49] "None policy: Start" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.502797 4866 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 22:16:42 crc kubenswrapper[4866]: I1213 22:16:42.502854 4866 state_mem.go:35] "Initializing new in-memory state store" Dec 13 22:16:42 crc kubenswrapper[4866]: E1213 22:16:42.513410 4866 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 22:16:42 crc kubenswrapper[4866]: E1213 22:16:42.545118 4866 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 22:16:42 crc kubenswrapper[4866]: E1213 22:16:42.646109 4866 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 22:16:42 crc kubenswrapper[4866]: E1213 22:16:42.746246 4866 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 22:16:42 crc kubenswrapper[4866]: E1213 22:16:42.748761 4866 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="800ms" Dec 13 22:16:42 crc kubenswrapper[4866]: E1213 22:16:42.846920 4866 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 22:16:42 crc kubenswrapper[4866]: E1213 22:16:42.914162 4866 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 22:16:42 crc kubenswrapper[4866]: E1213 22:16:42.947970 4866 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 22:16:43 crc kubenswrapper[4866]: E1213 22:16:43.048388 4866 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 22:16:43 crc kubenswrapper[4866]: W1213 22:16:43.049138 4866 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Dec 13 22:16:43 crc kubenswrapper[4866]: E1213 22:16:43.049232 4866 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Dec 13 22:16:43 crc kubenswrapper[4866]: I1213 22:16:43.144174 4866 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Dec 13 22:16:43 crc kubenswrapper[4866]: E1213 22:16:43.149472 4866 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 22:16:43 crc kubenswrapper[4866]: E1213 22:16:43.250010 4866 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 22:16:43 crc kubenswrapper[4866]: W1213 22:16:43.315814 4866 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Dec 13 22:16:43 crc kubenswrapper[4866]: E1213 22:16:43.315949 4866 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Dec 13 22:16:43 crc kubenswrapper[4866]: E1213 22:16:43.351213 4866 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 22:16:43 crc kubenswrapper[4866]: E1213 22:16:43.451376 4866 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 22:16:43 crc kubenswrapper[4866]: E1213 22:16:43.550307 4866 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="1.6s" Dec 13 22:16:43 crc kubenswrapper[4866]: E1213 22:16:43.551467 4866 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 22:16:43 crc kubenswrapper[4866]: W1213 22:16:43.591652 4866 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Dec 13 22:16:43 crc kubenswrapper[4866]: E1213 22:16:43.591775 4866 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Dec 13 22:16:43 crc kubenswrapper[4866]: E1213 22:16:43.651955 4866 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 22:16:43 crc kubenswrapper[4866]: E1213 22:16:43.696025 4866 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.69:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1880e63ca92ea7f4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-13 22:16:42.140067828 +0000 UTC m=+0.181406380,LastTimestamp:2025-12-13 22:16:42.140067828 +0000 UTC m=+0.181406380,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 13 22:16:43 crc kubenswrapper[4866]: W1213 22:16:43.701460 4866 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Dec 13 22:16:43 crc kubenswrapper[4866]: E1213 22:16:43.701568 4866 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Dec 13 22:16:43 crc kubenswrapper[4866]: E1213 22:16:43.715004 4866 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 22:16:43 crc kubenswrapper[4866]: E1213 22:16:43.752550 4866 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 22:16:43 crc kubenswrapper[4866]: I1213 22:16:43.820026 4866 manager.go:334] "Starting Device Plugin manager" Dec 13 22:16:43 crc kubenswrapper[4866]: I1213 22:16:43.820094 4866 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 22:16:43 crc kubenswrapper[4866]: I1213 
Dec 13 22:16:43 crc kubenswrapper[4866]: I1213 22:16:43.820109 4866 server.go:79] "Starting device plugin registration server" Dec 13 22:16:43 crc kubenswrapper[4866]: I1213 22:16:43.820508 4866 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 22:16:43 crc kubenswrapper[4866]: I1213 22:16:43.820531 4866 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 22:16:43 crc kubenswrapper[4866]: I1213 22:16:43.820757 4866 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 13 22:16:43 crc kubenswrapper[4866]: I1213 22:16:43.820837 4866 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 13 22:16:43 crc kubenswrapper[4866]: I1213 22:16:43.820845 4866 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 22:16:43 crc kubenswrapper[4866]: E1213 22:16:43.826689 4866 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 13 22:16:43 crc kubenswrapper[4866]: I1213 22:16:43.921402 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:43 crc kubenswrapper[4866]: I1213 22:16:43.922907 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:43 crc kubenswrapper[4866]: I1213 22:16:43.922935 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:43 crc kubenswrapper[4866]: I1213 22:16:43.922943 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:43 crc kubenswrapper[4866]: I1213 22:16:43.922970 4866 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 13 22:16:43 crc kubenswrapper[4866]: E1213 22:16:43.923366 4866 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.69:6443: connect: connection refused" node="crc" Dec 13 22:16:44 crc kubenswrapper[4866]: I1213 22:16:44.123928 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:44 crc kubenswrapper[4866]: I1213 22:16:44.125414 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:44 crc kubenswrapper[4866]: I1213 22:16:44.125588 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:44 crc kubenswrapper[4866]: I1213 22:16:44.125739 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:44 crc kubenswrapper[4866]: I1213 22:16:44.125886 4866 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 13 22:16:44 crc kubenswrapper[4866]: E1213 22:16:44.126587 4866 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.69:6443: connect: connection refused" node="crc" Dec 13 22:16:44 crc kubenswrapper[4866]: I1213 22:16:44.138812 4866 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 13 22:16:44 crc kubenswrapper[4866]: E1213 22:16:44.140189 4866 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate
from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Dec 13 22:16:44 crc kubenswrapper[4866]: I1213 22:16:44.144214 4866 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Dec 13 22:16:44 crc kubenswrapper[4866]: I1213 22:16:44.527174 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:44 crc kubenswrapper[4866]: I1213 22:16:44.528773 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:44 crc kubenswrapper[4866]: I1213 22:16:44.528899 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:44 crc kubenswrapper[4866]: I1213 22:16:44.528971 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:44 crc kubenswrapper[4866]: I1213 22:16:44.529067 4866 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 13 22:16:44 crc kubenswrapper[4866]: E1213 22:16:44.529655 4866 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.69:6443: connect: connection refused" node="crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.143571 4866 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Dec 13 22:16:45 crc kubenswrapper[4866]: E1213 22:16:45.151328 4866 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="3.2s" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.315166 4866 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.315296 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.316275 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.316312 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.316328 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.316463 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
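
The "SyncLoop ADD" with source="file" above is the pivotal transition in this boot: the five control-plane pods (kube-rbac-proxy-crio, etcd, kube-apiserver, kube-controller-manager, openshift-kube-scheduler) are static pods, read from manifest files on the node's own disk rather than from the API server, which is why the kubelet can begin creating them while every request to api-int.crc.testing:6443 is still refused. A minimal sketch of such a file source, assuming the conventional /etc/kubernetes/manifests directory (the log does not show the configured staticPodPath, so treat the path and the 20-second poll as assumptions):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "time"
    )

    func main() {
        dir := "/etc/kubernetes/manifests" // assumed static-pod manifest directory
        seen := map[string]bool{}
        for {
            entries, err := os.ReadDir(dir)
            if err != nil {
                fmt.Fprintln(os.Stderr, err)
                os.Exit(1)
            }
            for _, e := range entries {
                if e.IsDir() || seen[e.Name()] {
                    continue
                }
                seen[e.Name()] = true
                // a real kubelet decodes the manifest here and emits
                // a "SyncLoop ADD" with source="file"
                fmt.Println("ADD static pod manifest:", filepath.Join(dir, e.Name()))
            }
            time.Sleep(20 * time.Second) // mimics the kubelet's periodic file re-check
        }
    }

The "No sandbox for pod can be found. Need to start a new one" lines that follow are the next step for each of these pods: no sandbox survived the restart, so the container runtime is asked to create fresh ones.
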
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.317092 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.317063 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.317165 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.317174 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.317258 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.317443 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.317580 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.317796 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.317816 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.317825 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.317905 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.317951 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.317962 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.318229 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.318330 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.318367 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.318842 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.318868 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.318876 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.319298 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.319326 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.319337 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.319326 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.319374 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.319393 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.319434 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.319521 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.319553 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.320069 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.320087 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.320097 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.320218 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.320236 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.320377 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.320392 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.320399 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.320694 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.320712 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.320720 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.330472 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.331108 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.331201 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.331306 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.331408 4866 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 13 22:16:45 crc kubenswrapper[4866]: E1213 22:16:45.331988 4866 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.69:6443: connect: connection refused" node="crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.416672 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.416726 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.416751 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 
22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.416770 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.416796 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.416822 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.416843 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.416862 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.416968 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.417098 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.417128 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.417156 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.417183 4866 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.417213 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.417276 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.518347 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.518385 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.518404 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.518418 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.518436 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.518453 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.518477 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.518492 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.518511 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.518525 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.518540 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.518536 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.518571 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.518600 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.518601 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.518669 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.518633 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.518700 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.518701 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.518728 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.518684 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.518651 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.518668 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.518627 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.518564 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.518667 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.518772 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" 
(UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.518784 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.518839 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.518782 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: W1213 22:16:45.523106 4866 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Dec 13 22:16:45 crc kubenswrapper[4866]: E1213 22:16:45.523199 4866 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.650903 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.657852 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.680327 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: W1213 22:16:45.696608 4866 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-e486e71c14f8462ed9a2aff4541f6490b45fddeabe37cdedb160f39a5f4baf5d WatchSource:0}: Error finding container e486e71c14f8462ed9a2aff4541f6490b45fddeabe37cdedb160f39a5f4baf5d: Status 404 returned error can't find the container with id e486e71c14f8462ed9a2aff4541f6490b45fddeabe37cdedb160f39a5f4baf5d Dec 13 22:16:45 crc kubenswrapper[4866]: W1213 22:16:45.697386 4866 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-2223bed8a7817eb8c997a12684927e122223559247e4c3bc3f9f98261aa4ffd1 WatchSource:0}: Error finding container 2223bed8a7817eb8c997a12684927e122223559247e4c3bc3f9f98261aa4ffd1: Status 404 returned error can't find the container with id 2223bed8a7817eb8c997a12684927e122223559247e4c3bc3f9f98261aa4ffd1 Dec 13 22:16:45 crc kubenswrapper[4866]: W1213 22:16:45.700891 4866 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-0ceffcd84d38319fad039b57f87ddf5f93ff1535bc818924deccaf5d5bf442cb WatchSource:0}: Error finding container 0ceffcd84d38319fad039b57f87ddf5f93ff1535bc818924deccaf5d5bf442cb: Status 404 returned error can't find the container with id 0ceffcd84d38319fad039b57f87ddf5f93ff1535bc818924deccaf5d5bf442cb Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.701724 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: I1213 22:16:45.707884 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 22:16:45 crc kubenswrapper[4866]: W1213 22:16:45.713498 4866 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-c8b9f647570410dcc41e4ce52b5970922c51683e31f42d0ba23ff689f128453b WatchSource:0}: Error finding container c8b9f647570410dcc41e4ce52b5970922c51683e31f42d0ba23ff689f128453b: Status 404 returned error can't find the container with id c8b9f647570410dcc41e4ce52b5970922c51683e31f42d0ba23ff689f128453b Dec 13 22:16:45 crc kubenswrapper[4866]: W1213 22:16:45.726300 4866 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Dec 13 22:16:45 crc kubenswrapper[4866]: E1213 22:16:45.726405 4866 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Dec 13 22:16:45 crc kubenswrapper[4866]: W1213 22:16:45.910942 4866 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-c383a6e259394b763ffcd030a1bc9686ac46c2ec8189c3ac4e346acf82d9117b WatchSource:0}: Error finding container c383a6e259394b763ffcd030a1bc9686ac46c2ec8189c3ac4e346acf82d9117b: Status 404 returned error can't find the container with id c383a6e259394b763ffcd030a1bc9686ac46c2ec8189c3ac4e346acf82d9117b Dec 13 22:16:46 crc kubenswrapper[4866]: W1213 22:16:46.092421 4866 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Dec 13 22:16:46 crc kubenswrapper[4866]: E1213 22:16:46.092524 4866 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Dec 13 22:16:46 crc kubenswrapper[4866]: I1213 22:16:46.143941 4866 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Dec 13 22:16:46 crc kubenswrapper[4866]: I1213 22:16:46.222331 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c383a6e259394b763ffcd030a1bc9686ac46c2ec8189c3ac4e346acf82d9117b"} Dec 13 22:16:46 crc kubenswrapper[4866]: I1213 22:16:46.223766 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c8b9f647570410dcc41e4ce52b5970922c51683e31f42d0ba23ff689f128453b"} Dec 13 22:16:46 crc kubenswrapper[4866]: I1213 22:16:46.224653 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0ceffcd84d38319fad039b57f87ddf5f93ff1535bc818924deccaf5d5bf442cb"} Dec 13 22:16:46 crc kubenswrapper[4866]: I1213 22:16:46.225322 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e486e71c14f8462ed9a2aff4541f6490b45fddeabe37cdedb160f39a5f4baf5d"} Dec 13 22:16:46 crc kubenswrapper[4866]: I1213 22:16:46.225910 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"2223bed8a7817eb8c997a12684927e122223559247e4c3bc3f9f98261aa4ffd1"} Dec 13 22:16:46 crc kubenswrapper[4866]: W1213 22:16:46.277200 4866 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Dec 13 22:16:46 crc kubenswrapper[4866]: E1213 22:16:46.277334 4866 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Dec 13 22:16:46 crc kubenswrapper[4866]: I1213 22:16:46.933563 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:46 crc kubenswrapper[4866]: I1213 22:16:46.936012 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:46 crc kubenswrapper[4866]: I1213 22:16:46.936143 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:46 crc kubenswrapper[4866]: I1213 22:16:46.936159 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:46 crc kubenswrapper[4866]: I1213 22:16:46.936216 4866 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 13 22:16:46 crc kubenswrapper[4866]: E1213 22:16:46.937230 4866 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.69:6443: connect: connection refused" node="crc" Dec 13 22:16:47 crc kubenswrapper[4866]: I1213 22:16:47.143757 4866 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Dec 13 22:16:47 crc kubenswrapper[4866]: I1213 22:16:47.229370 4866 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="41244d6a7ffdbe1ce0a5ffce3b59b126d0b26cfd981c68fe1f151a901eca1614" exitCode=0 Dec 13 22:16:47 crc kubenswrapper[4866]: I1213 22:16:47.229432 4866 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"41244d6a7ffdbe1ce0a5ffce3b59b126d0b26cfd981c68fe1f151a901eca1614"} Dec 13 22:16:47 crc kubenswrapper[4866]: I1213 22:16:47.229520 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:47 crc kubenswrapper[4866]: I1213 22:16:47.230262 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:47 crc kubenswrapper[4866]: I1213 22:16:47.230289 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:47 crc kubenswrapper[4866]: I1213 22:16:47.230297 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:47 crc kubenswrapper[4866]: I1213 22:16:47.232182 4866 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="03b4d576a0d95e48a93ffacd03b28956524a6af320b4f75863c69eb384f01d8f" exitCode=0 Dec 13 22:16:47 crc kubenswrapper[4866]: I1213 22:16:47.232231 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"03b4d576a0d95e48a93ffacd03b28956524a6af320b4f75863c69eb384f01d8f"} Dec 13 22:16:47 crc kubenswrapper[4866]: I1213 22:16:47.232309 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:47 crc kubenswrapper[4866]: I1213 22:16:47.233179 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:47 crc kubenswrapper[4866]: I1213 22:16:47.233204 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:47 crc kubenswrapper[4866]: I1213 22:16:47.233214 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:47 crc kubenswrapper[4866]: I1213 22:16:47.235140 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"41791b1801c6e8c30ba4f05d06fce59632c94271c4db64853d8d511f13f9e26c"} Dec 13 22:16:47 crc kubenswrapper[4866]: I1213 22:16:47.235166 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2c4d57b379add2c2a64fb7e84b381674ac05488e23d7687ed9e1456d8d35c3f5"} Dec 13 22:16:47 crc kubenswrapper[4866]: I1213 22:16:47.236579 4866 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd" exitCode=0 Dec 13 22:16:47 crc kubenswrapper[4866]: I1213 22:16:47.236652 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd"} Dec 13 22:16:47 crc kubenswrapper[4866]: I1213 22:16:47.236771 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Dec 13 22:16:47 crc kubenswrapper[4866]: I1213 22:16:47.239956 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:47 crc kubenswrapper[4866]: I1213 22:16:47.239990 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:47 crc kubenswrapper[4866]: I1213 22:16:47.240007 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:47 crc kubenswrapper[4866]: I1213 22:16:47.240629 4866 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="321257b51c93964b8f6e59dacd2eb3d2bbdfe0857d0e891c99b3ec0cd584d0f4" exitCode=0 Dec 13 22:16:47 crc kubenswrapper[4866]: I1213 22:16:47.240658 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"321257b51c93964b8f6e59dacd2eb3d2bbdfe0857d0e891c99b3ec0cd584d0f4"} Dec 13 22:16:47 crc kubenswrapper[4866]: I1213 22:16:47.240769 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:47 crc kubenswrapper[4866]: I1213 22:16:47.242452 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:47 crc kubenswrapper[4866]: I1213 22:16:47.242695 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:47 crc kubenswrapper[4866]: I1213 22:16:47.242707 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:47 crc kubenswrapper[4866]: I1213 22:16:47.247463 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:47 crc kubenswrapper[4866]: I1213 22:16:47.248156 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:47 crc kubenswrapper[4866]: I1213 22:16:47.248175 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:47 crc kubenswrapper[4866]: I1213 22:16:47.248184 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:48 crc kubenswrapper[4866]: I1213 22:16:48.143887 4866 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Dec 13 22:16:48 crc kubenswrapper[4866]: I1213 22:16:48.235371 4866 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 13 22:16:48 crc kubenswrapper[4866]: E1213 22:16:48.236571 4866 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Dec 13 22:16:48 crc kubenswrapper[4866]: I1213 22:16:48.244099 4866 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" 
containerID="59af7903a168ae4f4b31385076abdfa005a144e56e86d32afffe28ab5f1d2837" exitCode=0 Dec 13 22:16:48 crc kubenswrapper[4866]: I1213 22:16:48.244198 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"59af7903a168ae4f4b31385076abdfa005a144e56e86d32afffe28ab5f1d2837"} Dec 13 22:16:48 crc kubenswrapper[4866]: I1213 22:16:48.244249 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:48 crc kubenswrapper[4866]: I1213 22:16:48.244954 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:48 crc kubenswrapper[4866]: I1213 22:16:48.244983 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:48 crc kubenswrapper[4866]: I1213 22:16:48.244993 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:48 crc kubenswrapper[4866]: I1213 22:16:48.246341 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"8217c219b5afe19a629a00ba5c013df7716860238597dea4a97e3c263d74ba74"} Dec 13 22:16:48 crc kubenswrapper[4866]: I1213 22:16:48.251146 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4b696647f75d1725c6836f0d377606f239b4704f61edf94acc87366e6d536e43"} Dec 13 22:16:48 crc kubenswrapper[4866]: I1213 22:16:48.253362 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"00c91b085943293a414c3b4f69185ca44fdaa25c72fb1ecd1d4d8fa2c99ca55b"} Dec 13 22:16:48 crc kubenswrapper[4866]: I1213 22:16:48.255324 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0a756e1889ab55c446200026441e5add70e87750f0c7bcd844c6d69410828074"} Dec 13 22:16:48 crc kubenswrapper[4866]: E1213 22:16:48.352412 4866 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="6.4s" Dec 13 22:16:49 crc kubenswrapper[4866]: I1213 22:16:49.260909 4866 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="503e437e0ad198ab5af20c8333b0edd1c33ee061714fb81224c3d366be9ca9a6" exitCode=0 Dec 13 22:16:49 crc kubenswrapper[4866]: I1213 22:16:49.260997 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"503e437e0ad198ab5af20c8333b0edd1c33ee061714fb81224c3d366be9ca9a6"} Dec 13 22:16:49 crc kubenswrapper[4866]: I1213 22:16:49.261076 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:49 crc kubenswrapper[4866]: I1213 22:16:49.262043 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 13 22:16:49 crc kubenswrapper[4866]: I1213 22:16:49.262081 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:49 crc kubenswrapper[4866]: I1213 22:16:49.262093 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:49 crc kubenswrapper[4866]: I1213 22:16:49.264914 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:49 crc kubenswrapper[4866]: I1213 22:16:49.264947 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:49 crc kubenswrapper[4866]: I1213 22:16:49.265368 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"82f0ec9ef48f93fde225982567d343b2d6ce5582ba240a10aac915cb1c60ce81"} Dec 13 22:16:49 crc kubenswrapper[4866]: I1213 22:16:49.265834 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:49 crc kubenswrapper[4866]: I1213 22:16:49.265856 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:49 crc kubenswrapper[4866]: I1213 22:16:49.265864 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:49 crc kubenswrapper[4866]: I1213 22:16:49.266426 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:49 crc kubenswrapper[4866]: I1213 22:16:49.266473 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:49 crc kubenswrapper[4866]: I1213 22:16:49.266488 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:50 crc kubenswrapper[4866]: I1213 22:16:50.001708 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 22:16:50 crc kubenswrapper[4866]: I1213 22:16:50.137646 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:50 crc kubenswrapper[4866]: I1213 22:16:50.140479 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:50 crc kubenswrapper[4866]: I1213 22:16:50.140535 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:50 crc kubenswrapper[4866]: I1213 22:16:50.140552 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:50 crc kubenswrapper[4866]: I1213 22:16:50.140588 4866 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 13 22:16:50 crc kubenswrapper[4866]: I1213 22:16:50.270800 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a7bbed91f0e5a78deddbb6976da0e24433b1a8dd6aafdfc849748f5c8ef8cf3d"} Dec 13 22:16:50 crc kubenswrapper[4866]: I1213 22:16:50.274515 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4f51680eb649a8778df43c6e4f2389caa6a69254bef0dee34535d27cc69fd688"} Dec 13 22:16:50 crc kubenswrapper[4866]: I1213 22:16:50.280820 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d85c84f61afc57a279dcd5e7bed6eb2519760762024ea2a71e2c6b5888af2864"} Dec 13 22:16:50 crc kubenswrapper[4866]: I1213 22:16:50.280940 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:50 crc kubenswrapper[4866]: I1213 22:16:50.282333 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:50 crc kubenswrapper[4866]: I1213 22:16:50.282377 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:50 crc kubenswrapper[4866]: I1213 22:16:50.282398 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:51 crc kubenswrapper[4866]: I1213 22:16:51.290352 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ea38f537631a639374cfef97b6fe12700e04fe753b9904895cf4ede827372848"} Dec 13 22:16:51 crc kubenswrapper[4866]: I1213 22:16:51.290376 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:51 crc kubenswrapper[4866]: I1213 22:16:51.292039 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:51 crc kubenswrapper[4866]: I1213 22:16:51.292278 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:51 crc kubenswrapper[4866]: I1213 22:16:51.292299 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:51 crc kubenswrapper[4866]: I1213 22:16:51.459192 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 22:16:52 crc kubenswrapper[4866]: I1213 22:16:52.297295 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c5f0c2c3ce34f3961f4be5be9beff2bbf0de2899cfac9219106773fdd8fb1969"} Dec 13 22:16:52 crc kubenswrapper[4866]: I1213 22:16:52.297365 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b4192aeb8b99901fba1d442e613510c1d2dca07650c4a84e6f585d42b4de74c6"} Dec 13 22:16:52 crc kubenswrapper[4866]: I1213 22:16:52.297387 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b57855b825b37a7a5c179476a16a5f4a6d66d9299395ad2bf8972b515ec81335"} Dec 13 22:16:52 crc kubenswrapper[4866]: I1213 22:16:52.297624 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:52 crc kubenswrapper[4866]: I1213 
22:16:52.299110 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:52 crc kubenswrapper[4866]: I1213 22:16:52.299158 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:52 crc kubenswrapper[4866]: I1213 22:16:52.299175 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:52 crc kubenswrapper[4866]: I1213 22:16:52.306431 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8badb9d461fbc970e520cbd92ddf854b86a1623943edeb0fe98256a5f14851e4"} Dec 13 22:16:52 crc kubenswrapper[4866]: I1213 22:16:52.306499 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d3f1be11db30f10cdfbe473b20446b445e0354170c4e02a8e2c8d73e4d32851e"} Dec 13 22:16:52 crc kubenswrapper[4866]: I1213 22:16:52.306514 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ee175c7ac5ad3a46279f497d29513424b7c6601c518649f63eb6b950c4f04013"} Dec 13 22:16:52 crc kubenswrapper[4866]: I1213 22:16:52.306520 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:52 crc kubenswrapper[4866]: I1213 22:16:52.306663 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:52 crc kubenswrapper[4866]: I1213 22:16:52.307370 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:52 crc kubenswrapper[4866]: I1213 22:16:52.306527 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"eb1f73695e99b201ce3d33b7d51712637ea7aa38a14a267e29cad18890ef1b77"} Dec 13 22:16:52 crc kubenswrapper[4866]: I1213 22:16:52.308203 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:52 crc kubenswrapper[4866]: I1213 22:16:52.308255 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:52 crc kubenswrapper[4866]: I1213 22:16:52.308278 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:52 crc kubenswrapper[4866]: I1213 22:16:52.309196 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:52 crc kubenswrapper[4866]: I1213 22:16:52.309238 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:52 crc kubenswrapper[4866]: I1213 22:16:52.309253 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:52 crc kubenswrapper[4866]: I1213 22:16:52.310042 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:52 crc kubenswrapper[4866]: I1213 22:16:52.310104 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:52 crc 
kubenswrapper[4866]: I1213 22:16:52.310119 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:52 crc kubenswrapper[4866]: I1213 22:16:52.611929 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:16:52 crc kubenswrapper[4866]: I1213 22:16:52.765463 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 13 22:16:53 crc kubenswrapper[4866]: I1213 22:16:53.293123 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 22:16:53 crc kubenswrapper[4866]: I1213 22:16:53.309150 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:53 crc kubenswrapper[4866]: I1213 22:16:53.309181 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:53 crc kubenswrapper[4866]: I1213 22:16:53.309153 4866 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 22:16:53 crc kubenswrapper[4866]: I1213 22:16:53.309274 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:53 crc kubenswrapper[4866]: I1213 22:16:53.310144 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:53 crc kubenswrapper[4866]: I1213 22:16:53.310175 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:53 crc kubenswrapper[4866]: I1213 22:16:53.310194 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:53 crc kubenswrapper[4866]: I1213 22:16:53.310392 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:53 crc kubenswrapper[4866]: I1213 22:16:53.310417 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:53 crc kubenswrapper[4866]: I1213 22:16:53.310429 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:53 crc kubenswrapper[4866]: I1213 22:16:53.310487 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:53 crc kubenswrapper[4866]: I1213 22:16:53.310499 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:53 crc kubenswrapper[4866]: I1213 22:16:53.310509 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:53 crc kubenswrapper[4866]: I1213 22:16:53.585841 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:16:53 crc kubenswrapper[4866]: E1213 22:16:53.826783 4866 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 13 22:16:54 crc kubenswrapper[4866]: I1213 22:16:54.312137 4866 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 22:16:54 crc kubenswrapper[4866]: I1213 22:16:54.313154 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Dec 13 22:16:54 crc kubenswrapper[4866]: I1213 22:16:54.312251 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:54 crc kubenswrapper[4866]: I1213 22:16:54.314992 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:54 crc kubenswrapper[4866]: I1213 22:16:54.315084 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:54 crc kubenswrapper[4866]: I1213 22:16:54.315090 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:54 crc kubenswrapper[4866]: I1213 22:16:54.315111 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:54 crc kubenswrapper[4866]: I1213 22:16:54.315138 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:54 crc kubenswrapper[4866]: I1213 22:16:54.315157 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:55 crc kubenswrapper[4866]: I1213 22:16:55.313737 4866 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 22:16:55 crc kubenswrapper[4866]: I1213 22:16:55.313789 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:55 crc kubenswrapper[4866]: I1213 22:16:55.318117 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:55 crc kubenswrapper[4866]: I1213 22:16:55.318649 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:55 crc kubenswrapper[4866]: I1213 22:16:55.318673 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:55 crc kubenswrapper[4866]: I1213 22:16:55.391313 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:16:56 crc kubenswrapper[4866]: I1213 22:16:56.298004 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 13 22:16:56 crc kubenswrapper[4866]: I1213 22:16:56.298244 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:56 crc kubenswrapper[4866]: I1213 22:16:56.299508 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:56 crc kubenswrapper[4866]: I1213 22:16:56.299640 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:56 crc kubenswrapper[4866]: I1213 22:16:56.299874 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:56 crc kubenswrapper[4866]: I1213 22:16:56.316211 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:56 crc kubenswrapper[4866]: I1213 22:16:56.317818 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:56 crc kubenswrapper[4866]: I1213 22:16:56.317867 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 13 22:16:56 crc kubenswrapper[4866]: I1213 22:16:56.317892 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:56 crc kubenswrapper[4866]: I1213 22:16:56.350478 4866 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 13 22:16:56 crc kubenswrapper[4866]: I1213 22:16:56.527181 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 22:16:56 crc kubenswrapper[4866]: I1213 22:16:56.527449 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:56 crc kubenswrapper[4866]: I1213 22:16:56.529615 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:56 crc kubenswrapper[4866]: I1213 22:16:56.529673 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:56 crc kubenswrapper[4866]: I1213 22:16:56.529697 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:56 crc kubenswrapper[4866]: I1213 22:16:56.534664 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 22:16:57 crc kubenswrapper[4866]: I1213 22:16:57.317901 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:57 crc kubenswrapper[4866]: I1213 22:16:57.318793 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:57 crc kubenswrapper[4866]: I1213 22:16:57.318925 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:57 crc kubenswrapper[4866]: I1213 22:16:57.319000 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:57 crc kubenswrapper[4866]: I1213 22:16:57.321473 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 22:16:58 crc kubenswrapper[4866]: I1213 22:16:58.242312 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 22:16:58 crc kubenswrapper[4866]: I1213 22:16:58.320438 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:58 crc kubenswrapper[4866]: I1213 22:16:58.323007 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:58 crc kubenswrapper[4866]: I1213 22:16:58.323101 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:58 crc kubenswrapper[4866]: I1213 22:16:58.323154 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:16:59 crc kubenswrapper[4866]: I1213 22:16:59.144732 4866 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 13 22:16:59 crc kubenswrapper[4866]: 
W1213 22:16:59.205718 4866 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 13 22:16:59 crc kubenswrapper[4866]: I1213 22:16:59.205853 4866 trace.go:236] Trace[2190878]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (13-Dec-2025 22:16:49.203) (total time: 10001ms): Dec 13 22:16:59 crc kubenswrapper[4866]: Trace[2190878]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (22:16:59.205) Dec 13 22:16:59 crc kubenswrapper[4866]: Trace[2190878]: [10.001925702s] [10.001925702s] END Dec 13 22:16:59 crc kubenswrapper[4866]: E1213 22:16:59.205892 4866 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 13 22:16:59 crc kubenswrapper[4866]: I1213 22:16:59.323202 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:16:59 crc kubenswrapper[4866]: I1213 22:16:59.324661 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:16:59 crc kubenswrapper[4866]: I1213 22:16:59.324726 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:16:59 crc kubenswrapper[4866]: I1213 22:16:59.324746 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:00 crc kubenswrapper[4866]: E1213 22:17:00.141642 4866 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Dec 13 22:17:00 crc kubenswrapper[4866]: I1213 22:17:00.439306 4866 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 13 22:17:00 crc kubenswrapper[4866]: I1213 22:17:00.439384 4866 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 13 22:17:00 crc kubenswrapper[4866]: I1213 22:17:00.453716 4866 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 13 22:17:00 crc kubenswrapper[4866]: I1213 22:17:00.453794 4866 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 13 22:17:01 crc kubenswrapper[4866]: I1213 22:17:01.242456 4866 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 13 22:17:01 crc kubenswrapper[4866]: I1213 22:17:01.242526 4866 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 13 22:17:02 crc kubenswrapper[4866]: I1213 22:17:02.617747 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:17:02 crc kubenswrapper[4866]: I1213 22:17:02.617949 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:17:02 crc kubenswrapper[4866]: I1213 22:17:02.618318 4866 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 13 22:17:02 crc kubenswrapper[4866]: I1213 22:17:02.618384 4866 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 13 22:17:02 crc kubenswrapper[4866]: I1213 22:17:02.620717 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:02 crc kubenswrapper[4866]: I1213 22:17:02.620823 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:02 crc kubenswrapper[4866]: I1213 22:17:02.620847 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:02 crc kubenswrapper[4866]: I1213 22:17:02.626705 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:17:03 crc kubenswrapper[4866]: I1213 22:17:03.331890 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:17:03 crc kubenswrapper[4866]: I1213 22:17:03.332437 4866 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 13 22:17:03 crc kubenswrapper[4866]: I1213 22:17:03.332514 4866 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 13 22:17:03 crc kubenswrapper[4866]: I1213 22:17:03.333515 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:03 crc kubenswrapper[4866]: I1213 22:17:03.333581 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:03 crc kubenswrapper[4866]: I1213 22:17:03.333605 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:03 crc kubenswrapper[4866]: E1213 22:17:03.826947 4866 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 13 22:17:05 crc kubenswrapper[4866]: I1213 22:17:05.392532 4866 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 13 22:17:05 crc kubenswrapper[4866]: I1213 22:17:05.392595 4866 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 13 22:17:05 crc kubenswrapper[4866]: I1213 22:17:05.431398 4866 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Dec 13 22:17:05 crc kubenswrapper[4866]: E1213 22:17:05.444324 4866 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="7s" Dec 13 22:17:05 crc kubenswrapper[4866]: I1213 22:17:05.462302 4866 trace.go:236] Trace[1505182911]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (13-Dec-2025 22:16:50.669) (total time: 14792ms): Dec 13 22:17:05 crc kubenswrapper[4866]: Trace[1505182911]: ---"Objects listed" error: 14792ms (22:17:05.462) Dec 13 22:17:05 crc kubenswrapper[4866]: Trace[1505182911]: [14.792867978s] [14.792867978s] END Dec 13 22:17:05 crc kubenswrapper[4866]: I1213 22:17:05.462339 4866 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 13 22:17:05 crc kubenswrapper[4866]: I1213 22:17:05.462444 4866 trace.go:236] Trace[1958734221]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (13-Dec-2025 22:16:51.514) (total time: 13947ms): Dec 13 22:17:05 crc kubenswrapper[4866]: Trace[1958734221]: ---"Objects listed" error: 13947ms (22:17:05.462) Dec 13 22:17:05 crc kubenswrapper[4866]: Trace[1958734221]: [13.947896113s] [13.947896113s] END Dec 13 22:17:05 crc kubenswrapper[4866]: I1213 22:17:05.462468 4866 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 13 22:17:05 crc kubenswrapper[4866]: I1213 22:17:05.463958 4866 trace.go:236] Trace[861299304]: "Reflector ListAndWatch" 
name:k8s.io/client-go/informers/factory.go:160 (13-Dec-2025 22:16:50.519) (total time: 14944ms): Dec 13 22:17:05 crc kubenswrapper[4866]: Trace[861299304]: ---"Objects listed" error: 14944ms (22:17:05.463) Dec 13 22:17:05 crc kubenswrapper[4866]: Trace[861299304]: [14.944814846s] [14.944814846s] END Dec 13 22:17:05 crc kubenswrapper[4866]: I1213 22:17:05.463979 4866 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 13 22:17:05 crc kubenswrapper[4866]: I1213 22:17:05.464351 4866 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 13 22:17:05 crc kubenswrapper[4866]: I1213 22:17:05.788483 4866 csr.go:261] certificate signing request csr-7jkkx is approved, waiting to be issued Dec 13 22:17:05 crc kubenswrapper[4866]: I1213 22:17:05.867021 4866 csr.go:257] certificate signing request csr-7jkkx is issued Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.147557 4866 apiserver.go:52] "Watching apiserver" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.153256 4866 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.153536 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.153976 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.153999 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.154090 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.154160 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.154262 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.154535 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.154726 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.154772 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.154824 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.160020 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.160030 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.160083 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.160098 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.160375 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.162087 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.163253 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.163276 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.163468 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.188160 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.206777 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.227768 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.244171 4866 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.245801 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.247896 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-265zz"] Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.248212 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-g6nd6"] Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.248394 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-265zz" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.248457 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.249642 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-2855n"] Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.250100 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-2855n" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.253538 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.253814 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.253895 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.254133 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.254554 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.254791 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.254958 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.255163 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.255435 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.255728 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.259728 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.259731 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.259888 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.268324 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.268456 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.268540 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 
22:17:06.268588 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.268757 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.268848 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.268875 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:06.768856785 +0000 UTC m=+24.810195337 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.269009 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.269105 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.269183 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.268881 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.268890 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.269250 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.269362 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.269400 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.269446 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.269470 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.269500 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.269520 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.269537 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.269554 4866 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.269573 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.269581 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.269591 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.269590 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.269649 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.269676 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.269702 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.269726 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.269751 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.269779 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.269804 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.269831 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.269852 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.269876 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.269900 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.269922 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.269946 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.269969 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.269994 4866 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270017 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270040 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270102 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270125 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270148 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270170 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270197 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270223 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270246 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 13 22:17:06 
crc kubenswrapper[4866]: I1213 22:17:06.270300 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270326 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270352 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270375 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270396 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270420 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270448 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270470 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270493 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270516 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 13 
22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270539 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270561 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270584 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270605 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270627 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270653 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270676 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270700 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270745 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.269801 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270770 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.269883 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270034 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270245 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270254 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270237 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270814 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270840 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270865 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270905 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270927 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270950 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270997 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271023 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271065 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271091 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271114 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271135 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271156 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271184 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271211 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271235 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271259 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271285 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271313 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271338 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271356 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271374 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271392 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271409 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271425 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271443 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271461 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271477 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271495 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271513 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271529 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271546 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271564 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271580 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271601 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271623 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271646 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271672 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271697 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271717 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271734 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271757 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271784 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271807 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271825 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271843 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271859 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271874 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271891 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271922 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271939 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271954 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271972 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271989 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272007 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272023 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272094 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272383 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272428 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272445 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272461 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272480 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272496 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272512 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272529 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272546 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272563 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272579 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272596 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272611 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272629 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272645 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272669 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272692 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272712 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272732 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272748 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272768 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272785 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272801 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272817 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272834 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272857 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272880 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272902 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272919 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272939 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272956 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272972 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272989 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273005 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273024 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273040 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273100 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273119 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273136 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273152 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273169 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273184 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273201 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273216 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273232 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273249 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273267 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273285 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273302 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273320 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273338 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273355 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273376 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273392 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273413 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273436 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273456 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273479 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273501 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273524 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273545 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273561 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273578 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273595 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273613 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273630 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273646 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273663 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273682 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273698 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273716 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273732 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273749 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273766 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273783 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273799 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273841 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273869 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273899 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273921 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273942 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273961 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273998 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.274016 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.274035 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.274092 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.274110 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.274133 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.274151 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.274173 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.274224 4866 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\""
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.274235 4866 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.274245 4866 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\""
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.274254 4866 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.274265 4866 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\""
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.274274 4866 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\""
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.274283 4866 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\""
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.274293 4866 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.274303 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\""
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.274314 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\""
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.274323 4866 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.274332 4866 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\""
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270276 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270468 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270482 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270610 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270663 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270759 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270752 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.270857 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271010 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271012 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271158 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271263 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271357 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271410 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271441 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271543 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271583 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271649 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271688 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271918 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.271945 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272065 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token".
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272376 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272584 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272765 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.272935 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.273919 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.274159 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.274485 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.274814 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.275066 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.275289 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.275431 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.275478 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.275500 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.275632 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.276107 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.276190 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.276517 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.276595 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.276681 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.276955 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.277162 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.277278 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). 
InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.278076 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.278246 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.278403 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.278733 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.279064 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.279211 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.279528 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.279564 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.279748 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.280277 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.280335 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.280664 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.280813 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.281023 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.281114 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.281591 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.281849 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.282086 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.282300 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.282378 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.282535 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.282691 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.283143 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.283389 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.283587 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.283750 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.284190 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.284390 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.285292 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.285369 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.284644 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.284861 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.284880 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.285030 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.285423 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.285534 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.285617 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.285719 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). 
InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.285868 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.286012 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.286075 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.286274 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.286444 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.286503 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.290384 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.290465 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.290770 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.291123 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.291224 4866 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.291393 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.291865 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.292007 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.292013 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.292340 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.292524 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.292715 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.292892 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.293024 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.293158 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.293305 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.293705 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.293796 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.294077 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.294120 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.294227 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.294325 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.294338 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.294346 4866 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.294675 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.294718 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.294692 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 22:17:06.794668122 +0000 UTC m=+24.836006674 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.294481 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.295095 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.295224 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.295594 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.295662 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.295677 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.295542 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.295771 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.296198 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.296306 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.296519 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.296714 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.297097 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.297252 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.297273 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.297479 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.297521 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.297597 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.297937 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.298715 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.299082 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.303285 4866 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.303438 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 22:17:06.803420298 +0000 UTC m=+24.844758840 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.304817 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.309237 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.309435 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.309692 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.309979 4866 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.310001 4866 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.310014 4866 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.310079 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-13 22:17:06.810064424 +0000 UTC m=+24.851402976 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.319126 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.319874 4866 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.319896 4866 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.319910 4866 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.319968 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-13 22:17:06.819949587 +0000 UTC m=+24.861288189 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.321294 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.321376 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.321640 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.321832 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.321932 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.322267 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.322572 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.324101 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.324520 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.324601 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.324734 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.325888 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.326439 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.324541 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.327597 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.327654 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.327827 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.327902 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.327998 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.328157 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.328365 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.328391 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.329432 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.329594 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.331393 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.332214 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.332873 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.333804 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.334120 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.334611 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.334628 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.335505 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.335720 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.336772 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.341294 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.341686 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.343839 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.344040 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.352784 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.352853 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.354781 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.355667 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). 
InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.356004 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.359361 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.359458 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.359707 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.359771 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.359796 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.365835 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.371340 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.377149 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-etc-kubernetes\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.377185 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9749ec6a-aa76-4ae0-a9d0-453edbf21bca-rootfs\") pod \"machine-config-daemon-2855n\" (UID: \"9749ec6a-aa76-4ae0-a9d0-453edbf21bca\") " pod="openshift-machine-config-operator/machine-config-daemon-2855n" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.377200 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9749ec6a-aa76-4ae0-a9d0-453edbf21bca-mcd-auth-proxy-config\") pod \"machine-config-daemon-2855n\" (UID: \"9749ec6a-aa76-4ae0-a9d0-453edbf21bca\") " pod="openshift-machine-config-operator/machine-config-daemon-2855n" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.377219 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c32dec92-1cb0-4596-b551-b35b25f09692-hosts-file\") pod \"node-resolver-265zz\" (UID: \"c32dec92-1cb0-4596-b551-b35b25f09692\") " pod="openshift-dns/node-resolver-265zz" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.377670 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.377842 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-cnibin\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.377938 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-multus-socket-dir-parent\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378015 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92zd4\" (UniqueName: \"kubernetes.io/projected/c32dec92-1cb0-4596-b551-b35b25f09692-kube-api-access-92zd4\") pod \"node-resolver-265zz\" (UID: \"c32dec92-1cb0-4596-b551-b35b25f09692\") " pod="openshift-dns/node-resolver-265zz" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378101 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-system-cni-dir\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378104 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378173 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378232 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-host-run-netns\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378271 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-host-var-lib-cni-bin\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378292 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-multus-conf-dir\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378308 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9749ec6a-aa76-4ae0-a9d0-453edbf21bca-proxy-tls\") pod \"machine-config-daemon-2855n\" (UID: \"9749ec6a-aa76-4ae0-a9d0-453edbf21bca\") " pod="openshift-machine-config-operator/machine-config-daemon-2855n" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378323 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-cni-binary-copy\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378338 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-host-var-lib-cni-multus\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378357 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-host-var-lib-kubelet\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378375 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp56q\" (UniqueName: \"kubernetes.io/projected/9749ec6a-aa76-4ae0-a9d0-453edbf21bca-kube-api-access-qp56q\") pod \"machine-config-daemon-2855n\" (UID: 
\"9749ec6a-aa76-4ae0-a9d0-453edbf21bca\") " pod="openshift-machine-config-operator/machine-config-daemon-2855n" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378391 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-multus-cni-dir\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378407 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-os-release\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378427 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378452 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-multus-daemon-config\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378471 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-hostroot\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378485 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-host-run-multus-certs\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378499 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vpjk\" (UniqueName: \"kubernetes.io/projected/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-kube-api-access-2vpjk\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378534 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-host-run-k8s-cni-cncf-io\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378685 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: 
\"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378739 4866 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378758 4866 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378768 4866 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378777 4866 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378786 4866 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378811 4866 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378820 4866 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378829 4866 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378839 4866 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378849 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378858 4866 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378867 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378875 4866 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378883 4866 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378891 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378900 4866 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378909 4866 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378918 4866 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378926 4866 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378934 4866 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378943 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378951 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378960 4866 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378969 4866 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378977 4866 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378985 4866 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.378993 4866 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379000 4866 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379009 4866 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379017 4866 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379025 4866 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379095 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379105 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379114 4866 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379123 4866 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379133 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379143 4866 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379151 4866 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379166 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: 
\"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379175 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379183 4866 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379191 4866 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379199 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379207 4866 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379216 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379225 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379233 4866 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379241 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379249 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379257 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379264 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379272 4866 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379280 4866 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379287 4866 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379297 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379305 4866 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379313 4866 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379321 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379329 4866 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379338 4866 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379347 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379357 4866 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379365 4866 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379382 4866 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379389 4866 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379397 4866 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379405 4866 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379412 4866 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379420 4866 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379428 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379436 4866 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379444 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379452 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379460 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379467 4866 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379475 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379483 4866 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379490 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: 
\"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379498 4866 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379506 4866 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379514 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379522 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379530 4866 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379538 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379546 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379554 4866 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379562 4866 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379570 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379578 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379586 4866 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379595 4866 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379606 4866 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379617 4866 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379627 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379637 4866 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379648 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379658 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379668 4866 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379679 4866 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379687 4866 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379695 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379703 4866 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379712 4866 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379720 4866 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379728 4866 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379750 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379759 4866 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379769 4866 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379777 4866 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379787 4866 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379796 4866 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379804 4866 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379812 4866 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379819 4866 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379827 4866 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379835 4866 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379843 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: 
\"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379851 4866 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379859 4866 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379866 4866 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379874 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379884 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379892 4866 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379910 4866 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379919 4866 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379928 4866 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379935 4866 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379943 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379951 4866 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379958 4866 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379969 4866 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379977 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379985 4866 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.379994 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380003 4866 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380011 4866 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380019 4866 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380027 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380035 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380044 4866 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380066 4866 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380076 4866 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380094 4866 reconciler_common.go:293] 
"Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380103 4866 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380112 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380120 4866 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380129 4866 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380137 4866 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380145 4866 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380153 4866 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380161 4866 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380169 4866 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380179 4866 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380186 4866 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380194 4866 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380202 4866 reconciler_common.go:293] 
"Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380210 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380217 4866 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380225 4866 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380235 4866 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380254 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380263 4866 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380271 4866 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380281 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380289 4866 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380297 4866 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380304 4866 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380312 4866 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380319 4866 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380327 4866 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380335 4866 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380343 4866 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380351 4866 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380359 4866 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380369 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380376 4866 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380384 4866 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380392 4866 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380399 4866 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.380530 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.381160 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 13 22:17:06 crc 
kubenswrapper[4866]: I1213 22:17:06.386678 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.390169 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.394826 4866 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c5f0c2c3ce34f3961f4be5be9beff2bbf0de2899cfac9219106773fdd8fb1969" exitCode=255 Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.394873 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c5f0c2c3ce34f3961f4be5be9beff2bbf0de2899cfac9219106773fdd8fb1969"} Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.398980 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.399584 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.403689 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.406194 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.408628 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.420379 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-265zz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c32dec92-1cb0-4596-b551-b35b25f09692\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92zd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-265zz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc 
kubenswrapper[4866]: I1213 22:17:06.428950 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g6nd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vpjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"
2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g6nd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.456880 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.457646 4866 scope.go:117] "RemoveContainer" containerID="c5f0c2c3ce34f3961f4be5be9beff2bbf0de2899cfac9219106773fdd8fb1969" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.461003 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.466466 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.471839 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.490360 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-hostroot\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.490425 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-host-run-multus-certs\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.490443 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vpjk\" (UniqueName: \"kubernetes.io/projected/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-kube-api-access-2vpjk\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.490460 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-host-run-k8s-cni-cncf-io\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.490450 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.490545 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c32dec92-1cb0-4596-b551-b35b25f09692-hosts-file\") pod \"node-resolver-265zz\" (UID: \"c32dec92-1cb0-4596-b551-b35b25f09692\") " pod="openshift-dns/node-resolver-265zz" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.490613 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-hostroot\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.490659 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-host-run-multus-certs\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.490474 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c32dec92-1cb0-4596-b551-b35b25f09692-hosts-file\") pod \"node-resolver-265zz\" (UID: \"c32dec92-1cb0-4596-b551-b35b25f09692\") " pod="openshift-dns/node-resolver-265zz" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.490911 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-etc-kubernetes\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.490986 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9749ec6a-aa76-4ae0-a9d0-453edbf21bca-rootfs\") pod \"machine-config-daemon-2855n\" (UID: \"9749ec6a-aa76-4ae0-a9d0-453edbf21bca\") " pod="openshift-machine-config-operator/machine-config-daemon-2855n" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.491015 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9749ec6a-aa76-4ae0-a9d0-453edbf21bca-mcd-auth-proxy-config\") pod \"machine-config-daemon-2855n\" (UID: \"9749ec6a-aa76-4ae0-a9d0-453edbf21bca\") " pod="openshift-machine-config-operator/machine-config-daemon-2855n" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.491134 4866 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-cnibin\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.491160 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-multus-socket-dir-parent\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.491181 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92zd4\" (UniqueName: \"kubernetes.io/projected/c32dec92-1cb0-4596-b551-b35b25f09692-kube-api-access-92zd4\") pod \"node-resolver-265zz\" (UID: \"c32dec92-1cb0-4596-b551-b35b25f09692\") " pod="openshift-dns/node-resolver-265zz" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.491244 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-system-cni-dir\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.491447 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-host-var-lib-cni-bin\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.491522 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-host-run-netns\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.491581 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-multus-conf-dir\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.491609 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9749ec6a-aa76-4ae0-a9d0-453edbf21bca-proxy-tls\") pod \"machine-config-daemon-2855n\" (UID: \"9749ec6a-aa76-4ae0-a9d0-453edbf21bca\") " pod="openshift-machine-config-operator/machine-config-daemon-2855n" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.491630 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-multus-cni-dir\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.491691 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-os-release\") pod \"multus-g6nd6\" 
(UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.491869 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-system-cni-dir\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.491957 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-etc-kubernetes\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.491998 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-host-var-lib-cni-bin\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.491954 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-multus-conf-dir\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.492033 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9749ec6a-aa76-4ae0-a9d0-453edbf21bca-rootfs\") pod \"machine-config-daemon-2855n\" (UID: \"9749ec6a-aa76-4ae0-a9d0-453edbf21bca\") " pod="openshift-machine-config-operator/machine-config-daemon-2855n" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.492100 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-host-run-netns\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.491716 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-cni-binary-copy\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.492289 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-os-release\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.492323 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-cnibin\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.492455 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-multus-socket-dir-parent\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.492485 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.492567 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-multus-cni-dir\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.492593 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-host-var-lib-cni-multus\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.492609 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-host-var-lib-kubelet\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.492643 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp56q\" (UniqueName: \"kubernetes.io/projected/9749ec6a-aa76-4ae0-a9d0-453edbf21bca-kube-api-access-qp56q\") pod \"machine-config-daemon-2855n\" (UID: \"9749ec6a-aa76-4ae0-a9d0-453edbf21bca\") " pod="openshift-machine-config-operator/machine-config-daemon-2855n" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.492665 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-host-var-lib-kubelet\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.492677 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-host-var-lib-cni-multus\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.493071 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-multus-daemon-config\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.493339 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-host-run-k8s-cni-cncf-io\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.493465 4866 reconciler_common.go:293] "Volume detached for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.493494 4866 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.493513 4866 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.493617 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-multus-daemon-config\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.494350 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9749ec6a-aa76-4ae0-a9d0-453edbf21bca-mcd-auth-proxy-config\") pod \"machine-config-daemon-2855n\" (UID: \"9749ec6a-aa76-4ae0-a9d0-453edbf21bca\") " pod="openshift-machine-config-operator/machine-config-daemon-2855n" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.497227 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-cni-binary-copy\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.504510 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9749ec6a-aa76-4ae0-a9d0-453edbf21bca-proxy-tls\") pod \"machine-config-daemon-2855n\" (UID: \"9749ec6a-aa76-4ae0-a9d0-453edbf21bca\") " pod="openshift-machine-config-operator/machine-config-daemon-2855n" Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.513838 4866 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 13 22:17:06 crc kubenswrapper[4866]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Dec 13 22:17:06 crc kubenswrapper[4866]: if [[ -f "/env/_master" ]]; then Dec 13 22:17:06 crc kubenswrapper[4866]: set -o allexport Dec 13 22:17:06 crc kubenswrapper[4866]: source "/env/_master" Dec 13 22:17:06 crc kubenswrapper[4866]: set +o allexport Dec 13 22:17:06 crc kubenswrapper[4866]: fi Dec 13 22:17:06 crc kubenswrapper[4866]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Dec 13 22:17:06 crc kubenswrapper[4866]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Dec 13 22:17:06 crc kubenswrapper[4866]: ho_enable="--enable-hybrid-overlay" Dec 13 22:17:06 crc kubenswrapper[4866]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Dec 13 22:17:06 crc kubenswrapper[4866]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Dec 13 22:17:06 crc kubenswrapper[4866]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Dec 13 22:17:06 crc kubenswrapper[4866]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 13 22:17:06 crc kubenswrapper[4866]: --webhook-cert-dir="/etc/webhook-cert" \ Dec 13 22:17:06 crc kubenswrapper[4866]: --webhook-host=127.0.0.1 \ Dec 13 22:17:06 crc kubenswrapper[4866]: --webhook-port=9743 \ Dec 13 22:17:06 crc kubenswrapper[4866]: ${ho_enable} \ Dec 13 22:17:06 crc kubenswrapper[4866]: --enable-interconnect \ Dec 13 22:17:06 crc kubenswrapper[4866]: --disable-approver \ Dec 13 22:17:06 crc kubenswrapper[4866]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Dec 13 22:17:06 crc kubenswrapper[4866]: --wait-for-kubernetes-api=200s \ Dec 13 22:17:06 crc kubenswrapper[4866]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Dec 13 22:17:06 crc kubenswrapper[4866]: --loglevel="${LOGLEVEL}" Dec 13 22:17:06 crc kubenswrapper[4866]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars Dec 13 22:17:06 crc kubenswrapper[4866]: > logger="UnhandledError" Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.518633 4866 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 13 22:17:06 crc kubenswrapper[4866]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Dec 13 22:17:06 crc kubenswrapper[4866]: if [[ -f "/env/_master" ]]; then Dec 13 22:17:06 crc kubenswrapper[4866]: set -o allexport Dec 13 22:17:06 crc kubenswrapper[4866]: source "/env/_master" Dec 13 22:17:06 crc kubenswrapper[4866]: set +o allexport Dec 13 22:17:06 crc kubenswrapper[4866]: fi Dec 13 22:17:06 crc kubenswrapper[4866]: Dec 13 22:17:06 crc kubenswrapper[4866]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Dec 13 22:17:06 crc kubenswrapper[4866]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 13 22:17:06 crc kubenswrapper[4866]: --disable-webhook \ Dec 13 22:17:06 crc kubenswrapper[4866]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Dec 13 22:17:06 crc kubenswrapper[4866]: --loglevel="${LOGLEVEL}" Dec 13 22:17:06 crc kubenswrapper[4866]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 13 22:17:06 crc kubenswrapper[4866]: > logger="UnhandledError" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.519295 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.519926 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.520376 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp56q\" (UniqueName: \"kubernetes.io/projected/9749ec6a-aa76-4ae0-a9d0-453edbf21bca-kube-api-access-qp56q\") pod \"machine-config-daemon-2855n\" (UID: \"9749ec6a-aa76-4ae0-a9d0-453edbf21bca\") " pod="openshift-machine-config-operator/machine-config-daemon-2855n" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.521181 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92zd4\" (UniqueName: \"kubernetes.io/projected/c32dec92-1cb0-4596-b551-b35b25f09692-kube-api-access-92zd4\") pod \"node-resolver-265zz\" (UID: \"c32dec92-1cb0-4596-b551-b35b25f09692\") " pod="openshift-dns/node-resolver-265zz" Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.526499 4866 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 13 22:17:06 crc kubenswrapper[4866]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Dec 13 22:17:06 crc kubenswrapper[4866]: set -o allexport Dec 13 22:17:06 crc kubenswrapper[4866]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Dec 13 22:17:06 crc kubenswrapper[4866]: source /etc/kubernetes/apiserver-url.env Dec 13 22:17:06 crc kubenswrapper[4866]: else Dec 13 22:17:06 crc kubenswrapper[4866]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Dec 13 22:17:06 crc kubenswrapper[4866]: exit 1 Dec 13 22:17:06 crc kubenswrapper[4866]: fi Dec 13 22:17:06 crc kubenswrapper[4866]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Dec 13 22:17:06 crc kubenswrapper[4866]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 13 22:17:06 crc kubenswrapper[4866]: > logger="UnhandledError" Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.527580 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.531398 4866 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.532781 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.533747 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vpjk\" (UniqueName: \"kubernetes.io/projected/7d8d0363-27e3-4269-8bf3-33fd2cf3af5c-kube-api-access-2vpjk\") pod \"multus-g6nd6\" (UID: \"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\") " pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.542538 4866 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.542683 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.546268 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.546297 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.546306 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.546399 4866 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.562958 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-265zz" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.570376 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2855n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9749ec6a-aa76-4ae0-a9d0-453edbf21bca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qp56q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qp56q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2855n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.570434 4866 kubelet_node_status.go:115] "Node was 
previously registered" node="crc" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.570702 4866 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.571862 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.571902 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.571913 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.571926 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.571936 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:06Z","lastTransitionTime":"2025-12-13T22:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.574129 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-g6nd6" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.577081 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.585795 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-2855n" Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.593581 4866 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cf8239d0-c705-4abb-ba7d-2dc5aaf26a3e\\\",\\\"systemUUID\\\":\\\"36cffe06-8718-49d3-bf76-a8b562df5fba\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.600491 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.603318 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.603358 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.603370 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.603389 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.603399 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:06Z","lastTransitionTime":"2025-12-13T22:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.606968 4866 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 13 22:17:06 crc kubenswrapper[4866]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Dec 13 22:17:06 crc kubenswrapper[4866]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Dec 13 22:17:06 crc kubenswrapper[4866]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{N
ame:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vpjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-g6nd6_openshift-multus(7d8d0363-27e3-4269-8bf3-33fd2cf3af5c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 13 22:17:06 crc kubenswrapper[4866]: > logger="UnhandledError" Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.608078 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-g6nd6" podUID="7d8d0363-27e3-4269-8bf3-33fd2cf3af5c" Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.615376 4866 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 13 22:17:06 crc kubenswrapper[4866]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Dec 13 22:17:06 crc kubenswrapper[4866]: set -uo pipefail Dec 13 22:17:06 crc kubenswrapper[4866]: Dec 13 22:17:06 crc kubenswrapper[4866]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Dec 13 22:17:06 crc kubenswrapper[4866]: Dec 13 22:17:06 crc kubenswrapper[4866]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Dec 13 22:17:06 crc kubenswrapper[4866]: HOSTS_FILE="/etc/hosts" Dec 13 22:17:06 crc kubenswrapper[4866]: TEMP_FILE="/etc/hosts.tmp" Dec 13 22:17:06 crc kubenswrapper[4866]: Dec 13 22:17:06 crc kubenswrapper[4866]: IFS=', ' read -r -a services <<< "${SERVICES}" Dec 13 22:17:06 crc kubenswrapper[4866]: Dec 13 22:17:06 crc kubenswrapper[4866]: # Make a temporary file with the old hosts file's attributes. Dec 13 22:17:06 crc kubenswrapper[4866]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Dec 13 22:17:06 crc kubenswrapper[4866]: echo "Failed to preserve hosts file. Exiting." Dec 13 22:17:06 crc kubenswrapper[4866]: exit 1 Dec 13 22:17:06 crc kubenswrapper[4866]: fi Dec 13 22:17:06 crc kubenswrapper[4866]: Dec 13 22:17:06 crc kubenswrapper[4866]: while true; do Dec 13 22:17:06 crc kubenswrapper[4866]: declare -A svc_ips Dec 13 22:17:06 crc kubenswrapper[4866]: for svc in "${services[@]}"; do Dec 13 22:17:06 crc kubenswrapper[4866]: # Fetch service IP from cluster dns if present. 
We make several tries Dec 13 22:17:06 crc kubenswrapper[4866]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Dec 13 22:17:06 crc kubenswrapper[4866]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Dec 13 22:17:06 crc kubenswrapper[4866]: # support UDP loadbalancers and require reaching DNS through TCP. Dec 13 22:17:06 crc kubenswrapper[4866]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 13 22:17:06 crc kubenswrapper[4866]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 13 22:17:06 crc kubenswrapper[4866]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 13 22:17:06 crc kubenswrapper[4866]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Dec 13 22:17:06 crc kubenswrapper[4866]: for i in ${!cmds[*]} Dec 13 22:17:06 crc kubenswrapper[4866]: do Dec 13 22:17:06 crc kubenswrapper[4866]: ips=($(eval "${cmds[i]}")) Dec 13 22:17:06 crc kubenswrapper[4866]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Dec 13 22:17:06 crc kubenswrapper[4866]: svc_ips["${svc}"]="${ips[@]}" Dec 13 22:17:06 crc kubenswrapper[4866]: break Dec 13 22:17:06 crc kubenswrapper[4866]: fi Dec 13 22:17:06 crc kubenswrapper[4866]: done Dec 13 22:17:06 crc kubenswrapper[4866]: done Dec 13 22:17:06 crc kubenswrapper[4866]: Dec 13 22:17:06 crc kubenswrapper[4866]: # Update /etc/hosts only if we get valid service IPs Dec 13 22:17:06 crc kubenswrapper[4866]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Dec 13 22:17:06 crc kubenswrapper[4866]: # Stale entries could exist in /etc/hosts if the service is deleted Dec 13 22:17:06 crc kubenswrapper[4866]: if [[ -n "${svc_ips[*]-}" ]]; then Dec 13 22:17:06 crc kubenswrapper[4866]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Dec 13 22:17:06 crc kubenswrapper[4866]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Dec 13 22:17:06 crc kubenswrapper[4866]: # Only continue rebuilding the hosts entries if its original content is preserved Dec 13 22:17:06 crc kubenswrapper[4866]: sleep 60 & wait Dec 13 22:17:06 crc kubenswrapper[4866]: continue Dec 13 22:17:06 crc kubenswrapper[4866]: fi Dec 13 22:17:06 crc kubenswrapper[4866]: Dec 13 22:17:06 crc kubenswrapper[4866]: # Append resolver entries for services Dec 13 22:17:06 crc kubenswrapper[4866]: rc=0 Dec 13 22:17:06 crc kubenswrapper[4866]: for svc in "${!svc_ips[@]}"; do Dec 13 22:17:06 crc kubenswrapper[4866]: for ip in ${svc_ips[${svc}]}; do Dec 13 22:17:06 crc kubenswrapper[4866]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Dec 13 22:17:06 crc kubenswrapper[4866]: done Dec 13 22:17:06 crc kubenswrapper[4866]: done Dec 13 22:17:06 crc kubenswrapper[4866]: if [[ $rc -ne 0 ]]; then Dec 13 22:17:06 crc kubenswrapper[4866]: sleep 60 & wait Dec 13 22:17:06 crc kubenswrapper[4866]: continue Dec 13 22:17:06 crc kubenswrapper[4866]: fi Dec 13 22:17:06 crc kubenswrapper[4866]: Dec 13 22:17:06 crc kubenswrapper[4866]: Dec 13 22:17:06 crc kubenswrapper[4866]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Dec 13 22:17:06 crc kubenswrapper[4866]: # Replace /etc/hosts with our modified version if needed Dec 13 22:17:06 crc kubenswrapper[4866]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Dec 13 22:17:06 crc kubenswrapper[4866]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Dec 13 22:17:06 crc kubenswrapper[4866]: fi Dec 13 22:17:06 crc kubenswrapper[4866]: sleep 60 & wait Dec 13 22:17:06 crc kubenswrapper[4866]: unset svc_ips Dec 13 22:17:06 crc kubenswrapper[4866]: done Dec 13 22:17:06 crc kubenswrapper[4866]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-92zd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-265zz_openshift-dns(c32dec92-1cb0-4596-b551-b35b25f09692): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 13 22:17:06 crc kubenswrapper[4866]: > logger="UnhandledError" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.617796 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.617859 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-265zz" podUID="c32dec92-1cb0-4596-b551-b35b25f09692" Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.628581 4866 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cf8239d0-c705-4abb-ba7d-2dc5aaf26a3e\\\",\\\"systemUUID\\\":\\\"36cffe06-8718-49d3-bf76-a8b562df5fba\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" 
Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.628830 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-265zz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c32dec92-1cb0-4596-b551-b35b25f09692\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92zd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-265zz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.633293 4866 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qp56q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-2855n_openshift-machine-config-operator(9749ec6a-aa76-4ae0-a9d0-453edbf21bca): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.634513 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.634538 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.634545 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.634576 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.634587 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:06Z","lastTransitionTime":"2025-12-13T22:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.640139 4866 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qp56q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-2855n_openshift-machine-config-operator(9749ec6a-aa76-4ae0-a9d0-453edbf21bca): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.641349 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-2855n" podUID="9749ec6a-aa76-4ae0-a9d0-453edbf21bca" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.643834 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g6nd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vpjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g6nd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.656099 4866 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cf8239d0-c705-4abb-ba7d-2dc5aaf26a3e\\\",\\\"systemUUID\\\":\\\"36cffe06-8718-49d3-bf76-a8b562df5fba\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.657132 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d450d44e-8219-4372-904c-6dfeb99953c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a756e1889ab55c446200026441e5add70e87750f0c7bcd844c6d69410828074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57855b825b37a7a5c179476a16a5f4a6d66d9299395ad2bf8972b515ec81335\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51680eb649a8778df43c6e4f2389caa6a69254bef0dee34535d27cc69fd688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f0c2c3ce34f3961f4be5be9beff2bbf0de2899cfac9219106773fdd8fb1969\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f0c2c3ce34f3961f4be5be9beff2bbf0de2899cfac9219106773fdd8fb1969\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"file observer\\\\nW1213 22:17:05.442934 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 22:17:05.443156 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 22:17:05.444268 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-965818911/tls.crt::/tmp/serving-cert-965818911/tls.key\\\\\\\"\\\\nI1213 22:17:06.201874 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 22:17:06.207737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 22:17:06.207764 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 22:17:06.207790 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 22:17:06.207795 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 22:17:06.221866 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 22:17:06.221906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 22:17:06.221913 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 22:17:06.221918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 22:17:06.221921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 22:17:06.221925 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 22:17:06.221928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 22:17:06.222239 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 22:17:06.225368 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4192aeb8b99901fba1d442e613510c1d2dca07650c4a84e6f585d42b4de74c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.660924 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.660953 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.660969 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.660984 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.660994 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:06Z","lastTransitionTime":"2025-12-13T22:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.669841 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-mh6xz"] Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.670480 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mh6xz" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.673922 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.674148 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.677751 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.677975 4866 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cf8239d0-c705-4abb-ba7d-2dc5aaf26a3e\\\",\\\"systemUUID\\\":\\\"36cffe06-8718-49d3-bf76-a8b562df5fba\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.684976 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.685011 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.685020 4866 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.685036 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.685057 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:06Z","lastTransitionTime":"2025-12-13T22:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.696957 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.701994 4866 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cf8239d0-c705-4abb-ba7d-2dc5aaf26a3e\\\",\\\"systemUUID\\\":\\\"36cffe06-8718-49d3-bf76-a8b562df5fba\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.702153 4866 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.705882 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.705906 4866 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.705914 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.705946 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.705956 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:06Z","lastTransitionTime":"2025-12-13T22:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.710241 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.722264 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.733092 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2855n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9749ec6a-aa76-4ae0-a9d0-453edbf21bca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qp56q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qp56q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2855n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.743611 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.795292 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.795729 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.795872 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9a704d7c-0ecc-4fb7-96d5-180353c3bf59-os-release\") pod \"multus-additional-cni-plugins-mh6xz\" (UID: \"9a704d7c-0ecc-4fb7-96d5-180353c3bf59\") " pod="openshift-multus/multus-additional-cni-plugins-mh6xz" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.795898 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9a704d7c-0ecc-4fb7-96d5-180353c3bf59-cni-binary-copy\") pod \"multus-additional-cni-plugins-mh6xz\" (UID: \"9a704d7c-0ecc-4fb7-96d5-180353c3bf59\") " pod="openshift-multus/multus-additional-cni-plugins-mh6xz" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.795918 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9a704d7c-0ecc-4fb7-96d5-180353c3bf59-cnibin\") pod \"multus-additional-cni-plugins-mh6xz\" (UID: \"9a704d7c-0ecc-4fb7-96d5-180353c3bf59\") " pod="openshift-multus/multus-additional-cni-plugins-mh6xz" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.795952 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9a704d7c-0ecc-4fb7-96d5-180353c3bf59-system-cni-dir\") pod \"multus-additional-cni-plugins-mh6xz\" (UID: \"9a704d7c-0ecc-4fb7-96d5-180353c3bf59\") " pod="openshift-multus/multus-additional-cni-plugins-mh6xz" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.795969 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78cxj\" (UniqueName: \"kubernetes.io/projected/9a704d7c-0ecc-4fb7-96d5-180353c3bf59-kube-api-access-78cxj\") pod \"multus-additional-cni-plugins-mh6xz\" (UID: \"9a704d7c-0ecc-4fb7-96d5-180353c3bf59\") " pod="openshift-multus/multus-additional-cni-plugins-mh6xz" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.795998 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9a704d7c-0ecc-4fb7-96d5-180353c3bf59-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mh6xz\" (UID: \"9a704d7c-0ecc-4fb7-96d5-180353c3bf59\") " pod="openshift-multus/multus-additional-cni-plugins-mh6xz" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.796033 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9a704d7c-0ecc-4fb7-96d5-180353c3bf59-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mh6xz\" (UID: \"9a704d7c-0ecc-4fb7-96d5-180353c3bf59\") " pod="openshift-multus/multus-additional-cni-plugins-mh6xz" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.796089 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.796419 4866 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.796451 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:07.796427076 +0000 UTC m=+25.837765628 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.796477 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 22:17:07.796469957 +0000 UTC m=+25.837808509 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.808123 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.808179 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.808190 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.808208 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.808237 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:06Z","lastTransitionTime":"2025-12-13T22:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.822110 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mh6xz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a704d7c-0ecc-4fb7-96d5-180353c3bf59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mh6xz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.838819 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.850812 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.864310 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2855n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9749ec6a-aa76-4ae0-a9d0-453edbf21bca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qp56q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qp56q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2855n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.867732 4866 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-12-13 22:12:05 +0000 UTC, rotation deadline is 2026-10-07 06:25:21.142270342 +0000 UTC Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.867794 4866 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7136h8m14.274477748s for next certificate rotation Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.878716 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.893501 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.896784 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9a704d7c-0ecc-4fb7-96d5-180353c3bf59-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mh6xz\" (UID: \"9a704d7c-0ecc-4fb7-96d5-180353c3bf59\") " pod="openshift-multus/multus-additional-cni-plugins-mh6xz" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.896868 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.897016 4866 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.897147 4866 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.897168 4866 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.897179 4866 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.897243 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-13 22:17:07.897229957 +0000 UTC m=+25.938568509 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.897574 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9a704d7c-0ecc-4fb7-96d5-180353c3bf59-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mh6xz\" (UID: \"9a704d7c-0ecc-4fb7-96d5-180353c3bf59\") " pod="openshift-multus/multus-additional-cni-plugins-mh6xz" Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.897605 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 22:17:07.897594775 +0000 UTC m=+25.938933327 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.897643 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.897689 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9a704d7c-0ecc-4fb7-96d5-180353c3bf59-os-release\") pod \"multus-additional-cni-plugins-mh6xz\" (UID: \"9a704d7c-0ecc-4fb7-96d5-180353c3bf59\") " pod="openshift-multus/multus-additional-cni-plugins-mh6xz" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.897723 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9a704d7c-0ecc-4fb7-96d5-180353c3bf59-cni-binary-copy\") pod \"multus-additional-cni-plugins-mh6xz\" (UID: \"9a704d7c-0ecc-4fb7-96d5-180353c3bf59\") " pod="openshift-multus/multus-additional-cni-plugins-mh6xz" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.897768 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9a704d7c-0ecc-4fb7-96d5-180353c3bf59-cnibin\") pod \"multus-additional-cni-plugins-mh6xz\" (UID: \"9a704d7c-0ecc-4fb7-96d5-180353c3bf59\") " pod="openshift-multus/multus-additional-cni-plugins-mh6xz" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.898584 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9a704d7c-0ecc-4fb7-96d5-180353c3bf59-system-cni-dir\") pod \"multus-additional-cni-plugins-mh6xz\" (UID: 
\"9a704d7c-0ecc-4fb7-96d5-180353c3bf59\") " pod="openshift-multus/multus-additional-cni-plugins-mh6xz" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.898602 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78cxj\" (UniqueName: \"kubernetes.io/projected/9a704d7c-0ecc-4fb7-96d5-180353c3bf59-kube-api-access-78cxj\") pod \"multus-additional-cni-plugins-mh6xz\" (UID: \"9a704d7c-0ecc-4fb7-96d5-180353c3bf59\") " pod="openshift-multus/multus-additional-cni-plugins-mh6xz" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.898881 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.898966 4866 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.898983 4866 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.899014 4866 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.897832 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9a704d7c-0ecc-4fb7-96d5-180353c3bf59-os-release\") pod \"multus-additional-cni-plugins-mh6xz\" (UID: \"9a704d7c-0ecc-4fb7-96d5-180353c3bf59\") " pod="openshift-multus/multus-additional-cni-plugins-mh6xz" Dec 13 22:17:06 crc kubenswrapper[4866]: E1213 22:17:06.899038 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-13 22:17:07.899030669 +0000 UTC m=+25.940369211 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.898525 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9a704d7c-0ecc-4fb7-96d5-180353c3bf59-cni-binary-copy\") pod \"multus-additional-cni-plugins-mh6xz\" (UID: \"9a704d7c-0ecc-4fb7-96d5-180353c3bf59\") " pod="openshift-multus/multus-additional-cni-plugins-mh6xz" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.897870 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9a704d7c-0ecc-4fb7-96d5-180353c3bf59-cnibin\") pod \"multus-additional-cni-plugins-mh6xz\" (UID: \"9a704d7c-0ecc-4fb7-96d5-180353c3bf59\") " pod="openshift-multus/multus-additional-cni-plugins-mh6xz" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.898643 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9a704d7c-0ecc-4fb7-96d5-180353c3bf59-system-cni-dir\") pod \"multus-additional-cni-plugins-mh6xz\" (UID: \"9a704d7c-0ecc-4fb7-96d5-180353c3bf59\") " pod="openshift-multus/multus-additional-cni-plugins-mh6xz" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.899125 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9a704d7c-0ecc-4fb7-96d5-180353c3bf59-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mh6xz\" (UID: \"9a704d7c-0ecc-4fb7-96d5-180353c3bf59\") " pod="openshift-multus/multus-additional-cni-plugins-mh6xz" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.899445 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9a704d7c-0ecc-4fb7-96d5-180353c3bf59-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mh6xz\" (UID: \"9a704d7c-0ecc-4fb7-96d5-180353c3bf59\") " pod="openshift-multus/multus-additional-cni-plugins-mh6xz" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.909916 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.909950 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.909961 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.909978 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.909989 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:06Z","lastTransitionTime":"2025-12-13T22:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.914323 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af35aee1-fb3b-462f-a7a3-e663a7f1a7aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb1f73695e99b201ce3d33b7d51712637ea7aa38a14a267e29cad18890ef1b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee175c7ac5ad3a46279f497d29513424b7c6601c518649f63eb6b950c4f04013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f1be11db30f10cdfbe473b20446b445e0354170c4e02a8e2c8d73e4d32851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badb9d461fbc970e520cbd92ddf854b86a1623943edeb0fe98256a5f14851e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d85c84f61afc57a279dcd5e7bed6eb2519760762024ea2a71e2c6b5888af2864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321257b51c93964b8f6e59dacd2eb3d2bbdfe0857d0e891c99b3ec0cd584d0f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321257b51c93964b8f6e59dacd2eb3d2bbdfe0857d0e891c99b3ec0cd584d0f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59af7903a168ae4f4b31385076abdfa005a144e56e86d32afffe28ab5f1d2837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59af7903a168ae4f4b31385076abdfa005a144e56e86d32afffe28ab5f1d2837\\\",\\\
"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://503e437e0ad198ab5af20c8333b0edd1c33ee061714fb81224c3d366be9ca9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503e437e0ad198ab5af20c8333b0edd1c33ee061714fb81224c3d366be9ca9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.921246 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78cxj\" (UniqueName: \"kubernetes.io/projected/9a704d7c-0ecc-4fb7-96d5-180353c3bf59-kube-api-access-78cxj\") pod \"multus-additional-cni-plugins-mh6xz\" (UID: \"9a704d7c-0ecc-4fb7-96d5-180353c3bf59\") " pod="openshift-multus/multus-additional-cni-plugins-mh6xz" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.931230 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d450d44e-8219-4372-904c-6dfeb99953c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a756e1889ab55c446200026441e5add70e87750f0c7bcd844c6d69410828074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57855b825b37a7a5c179476a16a5f4a6d66d9299395ad2bf8972b515ec81335\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51680eb649a8778df43c6e4f2389caa6a69254bef0dee34535d27cc69fd688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f0c2c3ce34f3961f4be5be9beff2bbf0de2899cfac9219106773fdd8fb1969\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f0c2c3ce34f3961f4be5be9beff2bbf0de2899cfac9219106773fdd8fb1969\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13
T22:17:06Z\\\",\\\"message\\\":\\\"file observer\\\\nW1213 22:17:05.442934 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 22:17:05.443156 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 22:17:05.444268 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-965818911/tls.crt::/tmp/serving-cert-965818911/tls.key\\\\\\\"\\\\nI1213 22:17:06.201874 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 22:17:06.207737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 22:17:06.207764 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 22:17:06.207790 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 22:17:06.207795 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 22:17:06.221866 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 22:17:06.221906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 22:17:06.221913 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 22:17:06.221918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 22:17:06.221921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 22:17:06.221925 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 22:17:06.221928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 22:17:06.222239 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 22:17:06.225368 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4192aeb8b99901fba1d442e613510c1d2dca07650c4a84e6f585d42b4de74c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.951704 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-265zz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c32dec92-1cb0-4596-b551-b35b25f09692\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92zd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-265zz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:06 crc kubenswrapper[4866]: I1213 22:17:06.981780 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mh6xz" Dec 13 22:17:07 crc kubenswrapper[4866]: E1213 22:17:07.020838 4866 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-78cxj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
multus-additional-cni-plugins-mh6xz_openshift-multus(9a704d7c-0ecc-4fb7-96d5-180353c3bf59): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.021322 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.021346 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.021356 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.021374 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.021386 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:07Z","lastTransitionTime":"2025-12-13T22:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:07 crc kubenswrapper[4866]: E1213 22:17:07.022804 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-mh6xz" podUID="9a704d7c-0ecc-4fb7-96d5-180353c3bf59" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.026788 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g6nd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vpjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g6nd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.098463 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zrmrs"] Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.099357 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.105160 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.106081 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.106710 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.106817 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.106905 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.107064 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.107591 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.111170 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-cni-bin\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.111204 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-node-log\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.111220 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b977f313-87b4-4173-9263-91bc45047631-ovnkube-config\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.111236 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-slash\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.111249 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-etc-openvswitch\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.111264 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8flqm\" (UniqueName: 
\"kubernetes.io/projected/b977f313-87b4-4173-9263-91bc45047631-kube-api-access-8flqm\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.111280 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-var-lib-openvswitch\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.111295 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-run-ovn-kubernetes\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.111309 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-systemd-units\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.111372 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-run-systemd\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.111388 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-run-netns\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.111400 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-run-ovn\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.111431 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-kubelet\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.111452 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 
22:17:07.111468 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b977f313-87b4-4173-9263-91bc45047631-env-overrides\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.111485 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b977f313-87b4-4173-9263-91bc45047631-ovn-node-metrics-cert\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.111500 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b977f313-87b4-4173-9263-91bc45047631-ovnkube-script-lib\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.111515 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-log-socket\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.111528 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-cni-netd\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.111543 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-run-openvswitch\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.125969 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.125993 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.126010 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.126025 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.126034 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:07Z","lastTransitionTime":"2025-12-13T22:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.131587 4866 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.142365 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.181754 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mh6xz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a704d7c-0ecc-4fb7-96d5-180353c3bf59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mh6xz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 
13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.211991 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-run-openvswitch\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.212322 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-node-log\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.212206 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-run-openvswitch\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.212257 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.212467 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-node-log\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.212415 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-cni-bin\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.212554 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-slash\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: E1213 22:17:07.212567 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.212602 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b977f313-87b4-4173-9263-91bc45047631-ovnkube-config\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.212627 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-etc-openvswitch\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.212649 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8flqm\" (UniqueName: \"kubernetes.io/projected/b977f313-87b4-4173-9263-91bc45047631-kube-api-access-8flqm\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.212688 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-var-lib-openvswitch\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.212711 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-run-ovn-kubernetes\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.212730 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-systemd-units\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.212777 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-run-systemd\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.212797 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-run-netns\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.212821 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-run-ovn\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.212870 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-kubelet\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.212896 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.212931 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b977f313-87b4-4173-9263-91bc45047631-env-overrides\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.212948 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b977f313-87b4-4173-9263-91bc45047631-ovn-node-metrics-cert\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.212964 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b977f313-87b4-4173-9263-91bc45047631-ovnkube-script-lib\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.212974 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-systemd-units\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.212995 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-log-socket\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.213025 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-log-socket\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.213042 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-cni-netd\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc 
kubenswrapper[4866]: I1213 22:17:07.213074 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-run-systemd\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.213146 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-cni-netd\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.213159 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-run-netns\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.213185 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-run-ovn\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.213185 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-slash\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.213236 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-kubelet\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.213259 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.213925 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b977f313-87b4-4173-9263-91bc45047631-ovnkube-config\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.213974 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-var-lib-openvswitch\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.213997 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-run-ovn-kubernetes\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.214450 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b977f313-87b4-4173-9263-91bc45047631-ovnkube-script-lib\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.214490 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-etc-openvswitch\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.214718 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b977f313-87b4-4173-9263-91bc45047631-env-overrides\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.214784 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-cni-bin\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.218333 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.218427 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b977f313-87b4-4173-9263-91bc45047631-ovn-node-metrics-cert\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.228577 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.228629 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.228641 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.228655 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.228664 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:07Z","lastTransitionTime":"2025-12-13T22:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.232003 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8flqm\" (UniqueName: \"kubernetes.io/projected/b977f313-87b4-4173-9263-91bc45047631-kube-api-access-8flqm\") pod \"ovnkube-node-zrmrs\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.235178 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.242700 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.252170 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2855n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9749ec6a-aa76-4ae0-a9d0-453edbf21bca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qp56q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qp56q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2855n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.261896 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.279598 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b977f313-87b4-4173-9263-91bc45047631\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zrmrs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.291326 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.310472 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af35aee1-fb3b-462f-a7a3-e663a7f1a7aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb1f73695e99b201ce3d33b7d51712637ea7aa38a14a267e29cad18890ef1b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee175c7ac5ad3a46279f497d29513424b7c6601c518649f63eb6b950c4f04013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f1be11db30f10cdfbe473b20446b445e0354170c4e02a8e2c8d73e4d32851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badb9d461fbc970e520cbd92ddf854b86a1623
943edeb0fe98256a5f14851e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d85c84f61afc57a279dcd5e7bed6eb2519760762024ea2a71e2c6b5888af2864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321257b51c93964b8f6e59dacd2eb3d2bbdfe0857d0e891c99b3ec0cd584d0f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321257b51c93964b8f6e59dacd2eb3d2bbdfe0857d0e891c99b3ec0cd584d0f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59af7903a168ae4f4b31385076abdfa005a144e56e86d32afffe28ab5f1d2837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59af7903a168ae4f4b31385076abdfa005a144e56e86d32afffe28ab5f1d2837\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://503e437e0ad198ab5af20c8333b0edd1c33ee061714fb81224c3d366be9ca9a6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503e437e0ad198ab5af20c8333b0edd1c33ee061714fb81224c3d366be9ca9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.324684 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d450d44e-8219-4372-904c-6dfeb99953c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a756e1889ab55c446200026441e5add70e87750f0c7bcd844c6d69410828074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57855b825b37a7a5c179476a16a5f4a6d66d9299395ad2bf8972b515ec81335\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51680eb649a8778df43c6e4f2389caa6a69254bef0dee34535d27cc69fd688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f0c2c3ce34f3961f4be5be9beff2bbf0de2899cfac9219106773fdd8fb1969\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f0c2c3ce34f3961f4be5be9beff2bbf0de2899cfac9219106773fdd8fb1969\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13
T22:17:06Z\\\",\\\"message\\\":\\\"file observer\\\\nW1213 22:17:05.442934 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 22:17:05.443156 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 22:17:05.444268 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-965818911/tls.crt::/tmp/serving-cert-965818911/tls.key\\\\\\\"\\\\nI1213 22:17:06.201874 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 22:17:06.207737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 22:17:06.207764 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 22:17:06.207790 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 22:17:06.207795 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 22:17:06.221866 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 22:17:06.221906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 22:17:06.221913 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 22:17:06.221918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 22:17:06.221921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 22:17:06.221925 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 22:17:06.221928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 22:17:06.222239 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 22:17:06.225368 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4192aeb8b99901fba1d442e613510c1d2dca07650c4a84e6f585d42b4de74c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.331281 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.331337 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.331350 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.331366 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.331380 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:07Z","lastTransitionTime":"2025-12-13T22:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.333333 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-265zz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c32dec92-1cb0-4596-b551-b35b25f09692\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92zd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-265zz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.345775 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g6nd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vpjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g6nd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.398101 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2855n" event={"ID":"9749ec6a-aa76-4ae0-a9d0-453edbf21bca","Type":"ContainerStarted","Data":"0e5387692dd67e8e2d7d43858316eea31d3030b6a1d7a3f11fae99b7d0dcebe2"} Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.400631 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-265zz" event={"ID":"c32dec92-1cb0-4596-b551-b35b25f09692","Type":"ContainerStarted","Data":"c73a8e4d8f38a357789c1f9317a8c246221681c2dff4303283dc196e38b30796"} Dec 13 
22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.402241 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b240b419119749f3dbe1d2020cd555f85416a708ce83e956589d64a6d2dbb6fe"} Dec 13 22:17:07 crc kubenswrapper[4866]: E1213 22:17:07.403809 4866 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 13 22:17:07 crc kubenswrapper[4866]: E1213 22:17:07.404153 4866 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qp56q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-2855n_openshift-machine-config-operator(9749ec6a-aa76-4ae0-a9d0-453edbf21bca): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 13 22:17:07 crc kubenswrapper[4866]: E1213 22:17:07.404245 4866 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 13 22:17:07 crc kubenswrapper[4866]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Dec 13 22:17:07 crc kubenswrapper[4866]: set -uo pipefail Dec 13 22:17:07 crc kubenswrapper[4866]: Dec 13 22:17:07 crc kubenswrapper[4866]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Dec 13 22:17:07 crc kubenswrapper[4866]: Dec 13 22:17:07 crc kubenswrapper[4866]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Dec 13 22:17:07 crc kubenswrapper[4866]: HOSTS_FILE="/etc/hosts" Dec 13 22:17:07 crc kubenswrapper[4866]: TEMP_FILE="/etc/hosts.tmp" Dec 13 22:17:07 crc kubenswrapper[4866]: Dec 13 22:17:07 crc kubenswrapper[4866]: IFS=', ' read -r -a services <<< "${SERVICES}" Dec 13 22:17:07 crc kubenswrapper[4866]: Dec 13 22:17:07 crc kubenswrapper[4866]: # Make a temporary file with the old hosts file's attributes. Dec 13 22:17:07 crc kubenswrapper[4866]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Dec 13 22:17:07 crc kubenswrapper[4866]: echo "Failed to preserve hosts file. Exiting." Dec 13 22:17:07 crc kubenswrapper[4866]: exit 1 Dec 13 22:17:07 crc kubenswrapper[4866]: fi Dec 13 22:17:07 crc kubenswrapper[4866]: Dec 13 22:17:07 crc kubenswrapper[4866]: while true; do Dec 13 22:17:07 crc kubenswrapper[4866]: declare -A svc_ips Dec 13 22:17:07 crc kubenswrapper[4866]: for svc in "${services[@]}"; do Dec 13 22:17:07 crc kubenswrapper[4866]: # Fetch service IP from cluster dns if present. We make several tries Dec 13 22:17:07 crc kubenswrapper[4866]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. 
The two last ones Dec 13 22:17:07 crc kubenswrapper[4866]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Dec 13 22:17:07 crc kubenswrapper[4866]: # support UDP loadbalancers and require reaching DNS through TCP. Dec 13 22:17:07 crc kubenswrapper[4866]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 13 22:17:07 crc kubenswrapper[4866]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 13 22:17:07 crc kubenswrapper[4866]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 13 22:17:07 crc kubenswrapper[4866]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Dec 13 22:17:07 crc kubenswrapper[4866]: for i in ${!cmds[*]} Dec 13 22:17:07 crc kubenswrapper[4866]: do Dec 13 22:17:07 crc kubenswrapper[4866]: ips=($(eval "${cmds[i]}")) Dec 13 22:17:07 crc kubenswrapper[4866]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Dec 13 22:17:07 crc kubenswrapper[4866]: svc_ips["${svc}"]="${ips[@]}" Dec 13 22:17:07 crc kubenswrapper[4866]: break Dec 13 22:17:07 crc kubenswrapper[4866]: fi Dec 13 22:17:07 crc kubenswrapper[4866]: done Dec 13 22:17:07 crc kubenswrapper[4866]: done Dec 13 22:17:07 crc kubenswrapper[4866]: Dec 13 22:17:07 crc kubenswrapper[4866]: # Update /etc/hosts only if we get valid service IPs Dec 13 22:17:07 crc kubenswrapper[4866]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Dec 13 22:17:07 crc kubenswrapper[4866]: # Stale entries could exist in /etc/hosts if the service is deleted Dec 13 22:17:07 crc kubenswrapper[4866]: if [[ -n "${svc_ips[*]-}" ]]; then Dec 13 22:17:07 crc kubenswrapper[4866]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Dec 13 22:17:07 crc kubenswrapper[4866]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Dec 13 22:17:07 crc kubenswrapper[4866]: # Only continue rebuilding the hosts entries if its original content is preserved Dec 13 22:17:07 crc kubenswrapper[4866]: sleep 60 & wait Dec 13 22:17:07 crc kubenswrapper[4866]: continue Dec 13 22:17:07 crc kubenswrapper[4866]: fi Dec 13 22:17:07 crc kubenswrapper[4866]: Dec 13 22:17:07 crc kubenswrapper[4866]: # Append resolver entries for services Dec 13 22:17:07 crc kubenswrapper[4866]: rc=0 Dec 13 22:17:07 crc kubenswrapper[4866]: for svc in "${!svc_ips[@]}"; do Dec 13 22:17:07 crc kubenswrapper[4866]: for ip in ${svc_ips[${svc}]}; do Dec 13 22:17:07 crc kubenswrapper[4866]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Dec 13 22:17:07 crc kubenswrapper[4866]: done Dec 13 22:17:07 crc kubenswrapper[4866]: done Dec 13 22:17:07 crc kubenswrapper[4866]: if [[ $rc -ne 0 ]]; then Dec 13 22:17:07 crc kubenswrapper[4866]: sleep 60 & wait Dec 13 22:17:07 crc kubenswrapper[4866]: continue Dec 13 22:17:07 crc kubenswrapper[4866]: fi Dec 13 22:17:07 crc kubenswrapper[4866]: Dec 13 22:17:07 crc kubenswrapper[4866]: Dec 13 22:17:07 crc kubenswrapper[4866]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Dec 13 22:17:07 crc kubenswrapper[4866]: # Replace /etc/hosts with our modified version if needed Dec 13 22:17:07 crc kubenswrapper[4866]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Dec 13 22:17:07 crc kubenswrapper[4866]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Dec 13 22:17:07 crc kubenswrapper[4866]: fi Dec 13 22:17:07 crc kubenswrapper[4866]: sleep 60 & wait Dec 13 22:17:07 crc kubenswrapper[4866]: unset svc_ips Dec 13 22:17:07 crc kubenswrapper[4866]: done Dec 13 22:17:07 crc kubenswrapper[4866]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-92zd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-265zz_openshift-dns(c32dec92-1cb0-4596-b551-b35b25f09692): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 13 22:17:07 crc kubenswrapper[4866]: > logger="UnhandledError" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.404909 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"9015de326887b228bca4fb5dcfca5f4e9095fe9fe1ad61e144f73051ba3f2d7f"} Dec 13 22:17:07 crc kubenswrapper[4866]: E1213 22:17:07.405300 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-265zz" podUID="c32dec92-1cb0-4596-b551-b35b25f09692" Dec 13 22:17:07 crc kubenswrapper[4866]: E1213 22:17:07.406314 4866 kuberuntime_manager.go:1274] "Unhandled Error" 
err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qp56q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-2855n_openshift-machine-config-operator(9749ec6a-aa76-4ae0-a9d0-453edbf21bca): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 13 22:17:07 crc kubenswrapper[4866]: E1213 22:17:07.406440 4866 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 13 22:17:07 crc kubenswrapper[4866]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Dec 13 22:17:07 crc kubenswrapper[4866]: set -o allexport Dec 13 22:17:07 crc kubenswrapper[4866]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Dec 13 22:17:07 crc kubenswrapper[4866]: source /etc/kubernetes/apiserver-url.env Dec 13 22:17:07 crc kubenswrapper[4866]: else Dec 13 22:17:07 crc kubenswrapper[4866]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Dec 13 22:17:07 crc kubenswrapper[4866]: exit 1 Dec 13 22:17:07 crc kubenswrapper[4866]: fi Dec 13 22:17:07 crc kubenswrapper[4866]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Dec 13 22:17:07 crc kubenswrapper[4866]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 13 22:17:07 crc kubenswrapper[4866]: > logger="UnhandledError" Dec 13 22:17:07 crc kubenswrapper[4866]: E1213 22:17:07.406191 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.407041 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mh6xz" event={"ID":"9a704d7c-0ecc-4fb7-96d5-180353c3bf59","Type":"ContainerStarted","Data":"e1c960be74639881277d8a02aa8aa6d028edf0e3f8102972c9d2a16b30b78419"} Dec 13 22:17:07 crc kubenswrapper[4866]: E1213 22:17:07.407440 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-2855n" podUID="9749ec6a-aa76-4ae0-a9d0-453edbf21bca" Dec 13 22:17:07 crc kubenswrapper[4866]: E1213 22:17:07.407507 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Dec 13 22:17:07 crc kubenswrapper[4866]: 
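
Every failure above is the same kubelet-side condition: CreateContainerConfigError with "services have not yet been read at least once, cannot construct envvars". The kubelet injects the link-style *_SERVICE_HOST / *_SERVICE_PORT environment variables for Services when it creates a container, and it refuses to do so until its service informer has completed an initial list against the API server. During single-node (CRC) startup this is expected to be transient, since the kube-apiserver static pod is itself still coming up in this same journal window. A minimal way to gauge the burst, assuming shell access to the host (the unit name matches the kubenswrapper entries above):

    # Count the transient envvar failures in this boot's kubelet journal
    journalctl -b -u kubelet | grep -c 'cannot construct envvars'

    # From a machine with cluster credentials, the same condition surfaces
    # as pod events (reason/field names are standard Kubernetes events):
    oc get events -A --field-selector reason=Failed | grep CreateContainerConfigError
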
E1213 22:17:07.408735 4866 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-78cxj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-mh6xz_openshift-multus(9a704d7c-0ecc-4fb7-96d5-180353c3bf59): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.408910 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g6nd6" event={"ID":"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c","Type":"ContainerStarted","Data":"3e409fa369f530896b0d6dbd3c768bb748fbccb752d4727fcfd918256a8ab800"} Dec 13 22:17:07 crc kubenswrapper[4866]: E1213 22:17:07.410093 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-mh6xz" podUID="9a704d7c-0ecc-4fb7-96d5-180353c3bf59" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.410348 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"cb613d123f3d791c7ac4fcfcc0896972b3134cde0e9bd4fa5e2928be50c00521"} Dec 13 22:17:07 crc kubenswrapper[4866]: E1213 22:17:07.410366 4866 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 13 22:17:07 crc kubenswrapper[4866]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Dec 13 22:17:07 crc kubenswrapper[4866]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Dec 13 22:17:07 crc kubenswrapper[4866]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vpjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Re
cursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-g6nd6_openshift-multus(7d8d0363-27e3-4269-8bf3-33fd2cf3af5c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 13 22:17:07 crc kubenswrapper[4866]: > logger="UnhandledError" Dec 13 22:17:07 crc kubenswrapper[4866]: E1213 22:17:07.411458 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-g6nd6" podUID="7d8d0363-27e3-4269-8bf3-33fd2cf3af5c" Dec 13 22:17:07 crc kubenswrapper[4866]: E1213 22:17:07.411543 4866 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 13 22:17:07 crc kubenswrapper[4866]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Dec 13 22:17:07 crc kubenswrapper[4866]: if [[ -f "/env/_master" ]]; then Dec 13 22:17:07 crc kubenswrapper[4866]: set -o allexport Dec 13 22:17:07 crc kubenswrapper[4866]: source "/env/_master" Dec 13 22:17:07 crc kubenswrapper[4866]: set +o allexport Dec 13 22:17:07 crc kubenswrapper[4866]: fi Dec 13 22:17:07 crc kubenswrapper[4866]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Dec 13 22:17:07 crc kubenswrapper[4866]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Dec 13 22:17:07 crc kubenswrapper[4866]: ho_enable="--enable-hybrid-overlay" Dec 13 22:17:07 crc kubenswrapper[4866]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Dec 13 22:17:07 crc kubenswrapper[4866]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Dec 13 22:17:07 crc kubenswrapper[4866]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Dec 13 22:17:07 crc kubenswrapper[4866]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 13 22:17:07 crc kubenswrapper[4866]: --webhook-cert-dir="/etc/webhook-cert" \ Dec 13 22:17:07 crc kubenswrapper[4866]: --webhook-host=127.0.0.1 \ Dec 13 22:17:07 crc kubenswrapper[4866]: --webhook-port=9743 \ Dec 13 22:17:07 crc kubenswrapper[4866]: ${ho_enable} \ Dec 13 22:17:07 crc kubenswrapper[4866]: --enable-interconnect \ Dec 13 22:17:07 crc kubenswrapper[4866]: --disable-approver \ Dec 13 22:17:07 crc kubenswrapper[4866]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Dec 13 22:17:07 crc kubenswrapper[4866]: --wait-for-kubernetes-api=200s \ Dec 13 22:17:07 crc kubenswrapper[4866]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Dec 13 22:17:07 crc kubenswrapper[4866]: --loglevel="${LOGLEVEL}" Dec 13 22:17:07 crc kubenswrapper[4866]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars Dec 13 22:17:07 crc kubenswrapper[4866]: > logger="UnhandledError" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.412671 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.413196 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.421710 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:07 crc kubenswrapper[4866]: E1213 22:17:07.422813 4866 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 13 22:17:07 crc kubenswrapper[4866]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Dec 13 22:17:07 crc kubenswrapper[4866]: if [[ -f "/env/_master" ]]; then Dec 13 22:17:07 crc kubenswrapper[4866]: set -o allexport Dec 13 22:17:07 crc kubenswrapper[4866]: source "/env/_master" Dec 13 22:17:07 crc kubenswrapper[4866]: set +o allexport Dec 13 22:17:07 crc kubenswrapper[4866]: fi Dec 13 22:17:07 crc kubenswrapper[4866]: Dec 13 22:17:07 crc kubenswrapper[4866]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Dec 13 22:17:07 crc kubenswrapper[4866]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 13 22:17:07 crc kubenswrapper[4866]: --disable-webhook \ Dec 13 22:17:07 crc kubenswrapper[4866]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Dec 13 22:17:07 crc kubenswrapper[4866]: --loglevel="${LOGLEVEL}" Dec 13 22:17:07 crc kubenswrapper[4866]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 13 22:17:07 crc kubenswrapper[4866]: > logger="UnhandledError" Dec 13 22:17:07 crc kubenswrapper[4866]: E1213 22:17:07.424019 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.442992 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"315b5749072410defd6fed09f5cc953c074319c1adda448fccd60a5d25c1f10c"} Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.443034 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.444583 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.446475 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.446501 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.446509 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.446521 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.446531 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:07Z","lastTransitionTime":"2025-12-13T22:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:07 crc kubenswrapper[4866]: W1213 22:17:07.446662 4866 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb977f313_87b4_4173_9263_91bc45047631.slice/crio-5d791af58011d92d7db58ac7570aef1c803edc5fd691e1ba8b21a05e423bfeab WatchSource:0}: Error finding container 5d791af58011d92d7db58ac7570aef1c803edc5fd691e1ba8b21a05e423bfeab: Status 404 returned error can't find the container with id 5d791af58011d92d7db58ac7570aef1c803edc5fd691e1ba8b21a05e423bfeab Dec 13 22:17:07 crc kubenswrapper[4866]: E1213 22:17:07.450920 4866 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 13 22:17:07 crc kubenswrapper[4866]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Dec 13 22:17:07 crc kubenswrapper[4866]: apiVersion: v1 Dec 13 22:17:07 crc kubenswrapper[4866]: clusters: Dec 13 22:17:07 crc kubenswrapper[4866]: - cluster: Dec 13 22:17:07 crc kubenswrapper[4866]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Dec 13 22:17:07 crc kubenswrapper[4866]: server: https://api-int.crc.testing:6443 Dec 13 22:17:07 crc kubenswrapper[4866]: name: default-cluster Dec 13 22:17:07 crc kubenswrapper[4866]: contexts: Dec 13 22:17:07 crc kubenswrapper[4866]: - context: Dec 13 22:17:07 crc kubenswrapper[4866]: cluster: default-cluster Dec 13 22:17:07 crc kubenswrapper[4866]: namespace: default Dec 13 22:17:07 crc kubenswrapper[4866]: user: default-auth Dec 13 22:17:07 crc kubenswrapper[4866]: name: default-context Dec 13 22:17:07 crc kubenswrapper[4866]: current-context: default-context Dec 13 22:17:07 crc kubenswrapper[4866]: kind: Config Dec 13 22:17:07 crc kubenswrapper[4866]: preferences: {} Dec 13 22:17:07 crc kubenswrapper[4866]: users: Dec 13 22:17:07 crc kubenswrapper[4866]: - name: default-auth Dec 13 22:17:07 crc kubenswrapper[4866]: user: Dec 13 22:17:07 crc kubenswrapper[4866]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 13 22:17:07 crc kubenswrapper[4866]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 13 22:17:07 crc kubenswrapper[4866]: EOF Dec 13 22:17:07 crc kubenswrapper[4866]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8flqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-zrmrs_openshift-ovn-kubernetes(b977f313-87b4-4173-9263-91bc45047631): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 13 22:17:07 crc kubenswrapper[4866]: > logger="UnhandledError" Dec 13 22:17:07 crc 
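
Two knock-on effects of the same startup race are visible here. First, each "Failed to update status for pod" entry is the kubelet's status patch being rejected by the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743, which is served by the very webhook container whose start failed above with the envvar error; the refusal is circular and clears once that pod runs, and the kubelet keeps retrying the patches in the meantime. Second, the node reports Ready=False because no CNI configuration exists yet in /etc/kubernetes/cni/net.d/; multus and ovnkube-node, both stuck on the same error, are the components that write it. A few node-local probes, assuming host shell access (endpoints and paths are the ones quoted in the entries above):

    # The webhook the status patches are failing against:
    curl -sk --max-time 2 https://127.0.0.1:9743/pod || echo 'webhook not listening yet'

    # CNI config dir the NodeNotReady condition complains about;
    # empty until multus/ovn-kubernetes start:
    ls -l /etc/kubernetes/cni/net.d/

    # Kubeconfig written by the kubecfg-setup heredoc above, plus the
    # client certificate it references:
    cat /etc/ovn/kubeconfig 2>/dev/null
    openssl x509 -in /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem -noout -subject -dates
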
kubenswrapper[4866]: E1213 22:17:07.453615 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" podUID="b977f313-87b4-4173-9263-91bc45047631" Dec 13 22:17:07 crc kubenswrapper[4866]: E1213 22:17:07.456282 4866 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.468649 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b977f313-87b4-4173-9263-91bc45047631\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zrmrs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.496460 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af35aee1-fb3b-462f-a7a3-e663a7f1a7aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb1f73695e99b201ce3d33b7d51712637ea7aa38a14a267e29cad18890ef1b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee175c7ac5ad3a46279f497d29513424b7c6601c518649f63eb6b950c4f04013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f1be11db30f10cdfbe473b20446b445e0354170c4e02a8e2c8d73e4d32851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badb9d461fbc970e520cbd92ddf854b86a1623
943edeb0fe98256a5f14851e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d85c84f61afc57a279dcd5e7bed6eb2519760762024ea2a71e2c6b5888af2864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321257b51c93964b8f6e59dacd2eb3d2bbdfe0857d0e891c99b3ec0cd584d0f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321257b51c93964b8f6e59dacd2eb3d2bbdfe0857d0e891c99b3ec0cd584d0f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59af7903a168ae4f4b31385076abdfa005a144e56e86d32afffe28ab5f1d2837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59af7903a168ae4f4b31385076abdfa005a144e56e86d32afffe28ab5f1d2837\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://503e437e0ad198ab5af20c8333b0edd1c33ee061714fb81224c3d366be9ca9a6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503e437e0ad198ab5af20c8333b0edd1c33ee061714fb81224c3d366be9ca9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.512096 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d450d44e-8219-4372-904c-6dfeb99953c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a756e1889ab55c446200026441e5add70e87750f0c7bcd844c6d69410828074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57855b825b37a7a5c179476a16a5f4a6d66d9299395ad2bf8972b515ec81335\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51680eb649a8778df43c6e4f2389caa6a69254bef0dee34535d27cc69fd688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f0c2c3ce34f3961f4be5be9beff2bbf0de2899cfac9219106773fdd8fb1969\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f0c2c3ce34f3961f4be5be9beff2bbf0de2899cfac9219106773fdd8fb1969\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13
T22:17:06Z\\\",\\\"message\\\":\\\"file observer\\\\nW1213 22:17:05.442934 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 22:17:05.443156 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 22:17:05.444268 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-965818911/tls.crt::/tmp/serving-cert-965818911/tls.key\\\\\\\"\\\\nI1213 22:17:06.201874 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 22:17:06.207737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 22:17:06.207764 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 22:17:06.207790 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 22:17:06.207795 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 22:17:06.221866 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 22:17:06.221906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 22:17:06.221913 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 22:17:06.221918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 22:17:06.221921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 22:17:06.221925 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 22:17:06.221928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 22:17:06.222239 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 22:17:06.225368 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4192aeb8b99901fba1d442e613510c1d2dca07650c4a84e6f585d42b4de74c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.520345 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-265zz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c32dec92-1cb0-4596-b551-b35b25f09692\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92zd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-265zz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.533766 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g6nd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vpjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g6nd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.542932 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.548644 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.548667 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.548684 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.548697 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.548706 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:07Z","lastTransitionTime":"2025-12-13T22:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.553729 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.567481 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mh6xz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a704d7c-0ecc-4fb7-96d5-180353c3bf59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mh6xz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.580636 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.619042 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.650171 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.650204 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.650215 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.650231 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.650243 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:07Z","lastTransitionTime":"2025-12-13T22:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.658720 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2855n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9749ec6a-aa76-4ae0-a9d0-453edbf21bca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qp56q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qp56q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2855n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.701413 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.737063 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.753556 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.753587 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.753595 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.753639 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.753660 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:07Z","lastTransitionTime":"2025-12-13T22:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.782200 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b977f313-87b4-4173-9263-91bc45047631\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zrmrs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.812729 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-265zz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c32dec92-1cb0-4596-b551-b35b25f09692\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92zd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-265zz\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.816997 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.817089 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 22:17:07 crc kubenswrapper[4866]: E1213 22:17:07.817143 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:09.817125137 +0000 UTC m=+27.858463689 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:07 crc kubenswrapper[4866]: E1213 22:17:07.817148 4866 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 13 22:17:07 crc kubenswrapper[4866]: E1213 22:17:07.817180 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 22:17:09.817172968 +0000 UTC m=+27.858511530 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.855998 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.856030 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.856041 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.856071 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.856083 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:07Z","lastTransitionTime":"2025-12-13T22:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.858264 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g6nd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vpjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g6nd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.921170 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.921229 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.921259 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 22:17:07 crc kubenswrapper[4866]: 
E1213 22:17:07.921347 4866 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 13 22:17:07 crc kubenswrapper[4866]: E1213 22:17:07.921394 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 22:17:09.921379469 +0000 UTC m=+27.962718021 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 13 22:17:07 crc kubenswrapper[4866]: E1213 22:17:07.922213 4866 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 22:17:07 crc kubenswrapper[4866]: E1213 22:17:07.922231 4866 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 22:17:07 crc kubenswrapper[4866]: E1213 22:17:07.922241 4866 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 22:17:07 crc kubenswrapper[4866]: E1213 22:17:07.922264 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-13 22:17:09.92225727 +0000 UTC m=+27.963595822 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 22:17:07 crc kubenswrapper[4866]: E1213 22:17:07.922303 4866 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 22:17:07 crc kubenswrapper[4866]: E1213 22:17:07.922312 4866 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 22:17:07 crc kubenswrapper[4866]: E1213 22:17:07.922319 4866 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 22:17:07 crc kubenswrapper[4866]: E1213 22:17:07.922336 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-13 22:17:09.922330481 +0000 UTC m=+27.963669033 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.946262 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af35aee1-fb3b-462f-a7a3-e663a7f1a7aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb1f73695e99b201ce3d33b7d51712637ea7aa38a14a267e29cad18890ef1b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee175c7ac5ad3a46279f497d29513424b7c6601c518649f63eb6b950c4f04013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f1be11db30f10cdfbe473b20446b445e0354170c4e02a8e2c8d73e4d32851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badb9d461fbc970e520cbd92ddf854b86a1623
943edeb0fe98256a5f14851e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d85c84f61afc57a279dcd5e7bed6eb2519760762024ea2a71e2c6b5888af2864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321257b51c93964b8f6e59dacd2eb3d2bbdfe0857d0e891c99b3ec0cd584d0f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321257b51c93964b8f6e59dacd2eb3d2bbdfe0857d0e891c99b3ec0cd584d0f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59af7903a168ae4f4b31385076abdfa005a144e56e86d32afffe28ab5f1d2837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59af7903a168ae4f4b31385076abdfa005a144e56e86d32afffe28ab5f1d2837\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://503e437e0ad198ab5af20c8333b0edd1c33ee061714fb81224c3d366be9ca9a6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503e437e0ad198ab5af20c8333b0edd1c33ee061714fb81224c3d366be9ca9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.957953 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.957981 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.957989 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.958001 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.958010 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:07Z","lastTransitionTime":"2025-12-13T22:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.971402 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d450d44e-8219-4372-904c-6dfeb99953c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a756e1889ab55c446200026441e5add70e87750f0c7bcd844c6d69410828074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57855b825b37a7a5c179476a16a5f4a6d66d9299395ad2bf8972b515ec81335\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51680eb649a8778df43c6e4f2389caa6a69254bef0dee34535d27cc69fd688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://315b5749072410defd6fed09f5cc953c074319c1adda448fccd60a5d25c1f10c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f0c2c3ce34f3961f4be5be9beff2bbf0de2899cfac9219106773fdd8fb1969\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"file observer\\\\nW1213 22:17:05.442934 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 22:17:05.443156 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 22:17:05.444268 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-965818911/tls.crt::/tmp/serving-cert-965818911/tls.key\\\\\\\"\\\\nI1213 22:17:06.201874 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 22:17:06.207737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 22:17:06.207764 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 22:17:06.207790 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 22:17:06.207795 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 22:17:06.221866 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 22:17:06.221906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 22:17:06.221913 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 22:17:06.221918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 22:17:06.221921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 22:17:06.221925 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 22:17:06.221928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 22:17:06.222239 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 22:17:06.225368 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4192aeb8b99901fba1d442e613510c1d2dca07650c4a84e6f585d42b4de74c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:07 crc kubenswrapper[4866]: I1213 22:17:07.984760 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.023191 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.059945 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.059975 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.059983 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.059996 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.060006 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:08Z","lastTransitionTime":"2025-12-13T22:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.067353 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mh6xz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a704d7c-0ecc-4fb7-96d5-180353c3bf59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mh6xz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.099176 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2855n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9749ec6a-aa76-4ae0-a9d0-453edbf21bca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qp56q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qp56q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2855n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.135170 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.162631 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.162667 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.162685 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.162701 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.162713 4866 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:08Z","lastTransitionTime":"2025-12-13T22:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.174938 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.212452 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 22:17:08 crc kubenswrapper[4866]: E1213 22:17:08.212565 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.213069 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 22:17:08 crc kubenswrapper[4866]: E1213 22:17:08.213170 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.221333 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.222034 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.223131 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.223784 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.225780 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.226460 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.227211 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.228760 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.229648 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.230810 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.231412 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" 
Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.232798 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.233333 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.233856 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.234785 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.235690 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.237339 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.237741 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.238347 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.239539 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.240033 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.241095 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.241630 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.242755 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.243344 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 
22:17:08.244284 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.246004 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.246711 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.247642 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.248260 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.249347 4866 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.249474 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.251350 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.252526 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.253006 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.254837 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.255566 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.256441 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.257221 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 
22:17:08.258339 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.258955 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.259998 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.260780 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.262312 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.262873 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.264965 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.265316 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.265349 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.265361 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.265379 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.265391 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:08Z","lastTransitionTime":"2025-12-13T22:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.266798 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.267756 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.268343 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.268886 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.269489 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.270116 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.270798 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.271596 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.272496 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.278625 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.280907 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.285338 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.295304 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.315631 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2855n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9749ec6a-aa76-4ae0-a9d0-453edbf21bca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qp56q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qp56q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2855n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.355480 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.367804 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.367995 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.368096 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.368186 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.368248 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:08Z","lastTransitionTime":"2025-12-13T22:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.395269 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.441364 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b977f313-87b4-4173-9263-91bc45047631\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:07Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-zrmrs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.444076 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" event={"ID":"b977f313-87b4-4173-9263-91bc45047631","Type":"ContainerStarted","Data":"5d791af58011d92d7db58ac7570aef1c803edc5fd691e1ba8b21a05e423bfeab"} Dec 13 22:17:08 crc kubenswrapper[4866]: E1213 22:17:08.445728 4866 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 13 22:17:08 crc kubenswrapper[4866]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Dec 13 22:17:08 crc kubenswrapper[4866]: apiVersion: v1 Dec 13 22:17:08 crc kubenswrapper[4866]: clusters: Dec 13 22:17:08 crc kubenswrapper[4866]: - cluster: Dec 13 22:17:08 crc kubenswrapper[4866]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Dec 13 22:17:08 crc kubenswrapper[4866]: server: https://api-int.crc.testing:6443 Dec 13 22:17:08 crc kubenswrapper[4866]: name: default-cluster Dec 13 22:17:08 crc kubenswrapper[4866]: contexts: Dec 13 22:17:08 crc kubenswrapper[4866]: - context: Dec 13 22:17:08 crc kubenswrapper[4866]: cluster: default-cluster Dec 13 22:17:08 crc kubenswrapper[4866]: namespace: default Dec 13 22:17:08 crc kubenswrapper[4866]: user: default-auth Dec 13 22:17:08 crc kubenswrapper[4866]: name: default-context Dec 13 22:17:08 crc kubenswrapper[4866]: current-context: default-context Dec 13 22:17:08 crc kubenswrapper[4866]: kind: Config Dec 13 22:17:08 crc kubenswrapper[4866]: preferences: {} Dec 13 22:17:08 crc kubenswrapper[4866]: users: Dec 13 22:17:08 crc kubenswrapper[4866]: - name: default-auth Dec 13 22:17:08 crc kubenswrapper[4866]: user: Dec 13 22:17:08 crc kubenswrapper[4866]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 13 22:17:08 crc kubenswrapper[4866]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 13 22:17:08 crc kubenswrapper[4866]: EOF Dec 13 22:17:08 crc kubenswrapper[4866]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8flqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-zrmrs_openshift-ovn-kubernetes(b977f313-87b4-4173-9263-91bc45047631): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 13 22:17:08 crc kubenswrapper[4866]: > logger="UnhandledError" Dec 13 
22:17:08 crc kubenswrapper[4866]: E1213 22:17:08.446900 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" podUID="b977f313-87b4-4173-9263-91bc45047631" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.470319 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.470357 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.470366 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.470382 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.470391 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:08Z","lastTransitionTime":"2025-12-13T22:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:08 crc kubenswrapper[4866]: E1213 22:17:08.473620 4866 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.501142 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af35aee1-fb3b-462f-a7a3-e663a7f1a7aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb1f73695e99b201ce3d33b7d51712637ea7aa38a14a267e29cad18890ef1b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee175c7ac5ad3a46279f497d29513424b7c6601c518649f63eb6b950c4f04013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f1be11db30f10cdfbe473b20446b445e0354170c4e02a8e2c8d73e4d32851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badb9d461fbc970e520cbd92ddf854b86a1623
943edeb0fe98256a5f14851e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d85c84f61afc57a279dcd5e7bed6eb2519760762024ea2a71e2c6b5888af2864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321257b51c93964b8f6e59dacd2eb3d2bbdfe0857d0e891c99b3ec0cd584d0f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321257b51c93964b8f6e59dacd2eb3d2bbdfe0857d0e891c99b3ec0cd584d0f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59af7903a168ae4f4b31385076abdfa005a144e56e86d32afffe28ab5f1d2837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59af7903a168ae4f4b31385076abdfa005a144e56e86d32afffe28ab5f1d2837\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://503e437e0ad198ab5af20c8333b0edd1c33ee061714fb81224c3d366be9ca9a6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503e437e0ad198ab5af20c8333b0edd1c33ee061714fb81224c3d366be9ca9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.536535 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d450d44e-8219-4372-904c-6dfeb99953c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a756e1889ab55c446200026441e5add70e87750f0c7bcd844c6d69410828074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57855b825b37a7a5c179476a16a5f4a6d66d9299395ad2bf8972b515ec81335\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51680eb649a8778df43c6e4f2389caa6a69254bef0dee34535d27cc69fd688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://315b5749072410defd6fed09f5cc953c074319c1adda448fccd60a5d25c1f10c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f0c2c3ce34f3961f4be5be9beff2bbf0de2899cfac9219106773fdd8fb1969\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"file observer\\\\nW1213 22:17:05.442934 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 22:17:05.443156 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 22:17:05.444268 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-965818911/tls.crt::/tmp/serving-cert-965818911/tls.key\\\\\\\"\\\\nI1213 22:17:06.201874 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 22:17:06.207737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 22:17:06.207764 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 22:17:06.207790 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 22:17:06.207795 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 22:17:06.221866 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 22:17:06.221906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 22:17:06.221913 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 22:17:06.221918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 22:17:06.221921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 22:17:06.221925 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 22:17:06.221928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 22:17:06.222239 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 22:17:06.225368 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4192aeb8b99901fba1d442e613510c1d2dca07650c4a84e6f585d42b4de74c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.572989 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.573027 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.573037 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.573065 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.573077 4866 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:08Z","lastTransitionTime":"2025-12-13T22:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.573470 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-265zz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c32dec92-1cb0-4596-b551-b35b25f09692\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92zd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-265zz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.620647 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g6nd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vpjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-g6nd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.661240 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.675608 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.675668 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.675696 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.675740 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.675763 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:08Z","lastTransitionTime":"2025-12-13T22:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.697837 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.743711 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mh6xz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a704d7c-0ecc-4fb7-96d5-180353c3bf59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mh6xz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.776532 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdf761bd-0d95-4826-ba0d-8caae359ecf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41791b1801c6e8c30ba4f05d06fce59632c94271c4db64853d8d511f13f9e26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4d57b379add2c2a64fb7e84b381674ac05488e23d7687ed9e1456d8d35c3f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00c91b085943293a414c3b4f69185ca44fdaa25c72fb1ecd1d4d8fa2c99ca55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f0ec9ef48f93fde225982567d343b2d6ce5582ba240a10aac915cb1c60ce81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.778230 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.778279 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.778296 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.778317 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.778334 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:08Z","lastTransitionTime":"2025-12-13T22:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.815610 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.854220 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.880308 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.880353 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.880365 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.880383 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.880395 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:08Z","lastTransitionTime":"2025-12-13T22:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.895590 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2855n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9749ec6a-aa76-4ae0-a9d0-453edbf21bca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qp56q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qp56q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2855n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.937673 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.974954 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.982654 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.982789 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.982846 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.982906 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:08 crc kubenswrapper[4866]: I1213 22:17:08.982969 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:08Z","lastTransitionTime":"2025-12-13T22:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.016008 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-htw2b"] Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.016455 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-htw2b" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.025629 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b977f313-87b4-4173-9263-91bc45047631\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0
b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"na
me\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs
\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zrmrs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.029102 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.029368 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhb67\" (UniqueName: \"kubernetes.io/projected/cd122e00-4561-49cd-9477-2517a6094fb5-kube-api-access-fhb67\") pod \"node-ca-htw2b\" (UID: \"cd122e00-4561-49cd-9477-2517a6094fb5\") " pod="openshift-image-registry/node-ca-htw2b" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.029502 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cd122e00-4561-49cd-9477-2517a6094fb5-host\") pod \"node-ca-htw2b\" (UID: \"cd122e00-4561-49cd-9477-2517a6094fb5\") " pod="openshift-image-registry/node-ca-htw2b" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.029610 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/cd122e00-4561-49cd-9477-2517a6094fb5-serviceca\") pod \"node-ca-htw2b\" (UID: \"cd122e00-4561-49cd-9477-2517a6094fb5\") " pod="openshift-image-registry/node-ca-htw2b" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.048766 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.070246 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.084800 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.084962 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.085068 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.085187 4866 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.085290 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:09Z","lastTransitionTime":"2025-12-13T22:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.088985 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.130977 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhb67\" (UniqueName: \"kubernetes.io/projected/cd122e00-4561-49cd-9477-2517a6094fb5-kube-api-access-fhb67\") pod \"node-ca-htw2b\" (UID: \"cd122e00-4561-49cd-9477-2517a6094fb5\") " pod="openshift-image-registry/node-ca-htw2b" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.131018 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cd122e00-4561-49cd-9477-2517a6094fb5-host\") pod \"node-ca-htw2b\" (UID: \"cd122e00-4561-49cd-9477-2517a6094fb5\") " pod="openshift-image-registry/node-ca-htw2b" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.131058 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/cd122e00-4561-49cd-9477-2517a6094fb5-serviceca\") pod \"node-ca-htw2b\" (UID: \"cd122e00-4561-49cd-9477-2517a6094fb5\") " pod="openshift-image-registry/node-ca-htw2b" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.131154 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cd122e00-4561-49cd-9477-2517a6094fb5-host\") pod \"node-ca-htw2b\" (UID: \"cd122e00-4561-49cd-9477-2517a6094fb5\") " pod="openshift-image-registry/node-ca-htw2b" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.131871 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/cd122e00-4561-49cd-9477-2517a6094fb5-serviceca\") pod \"node-ca-htw2b\" (UID: \"cd122e00-4561-49cd-9477-2517a6094fb5\") " pod="openshift-image-registry/node-ca-htw2b" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.143877 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af35aee1-fb3b-462f-a7a3-e663a7f1a7aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb1f73695e99b201ce3d33b7d51712637ea7aa38a14a267e29cad18890ef1b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee175c7ac5ad3a46279f497d29513424b7c6601c518649f63eb6b950c4f04013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f1be11db30f10cdfbe473b20446b445e0354170c4e02a8e2c8d73e4d32851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badb9d461fbc970e520cbd92ddf854b86a1623
943edeb0fe98256a5f14851e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d85c84f61afc57a279dcd5e7bed6eb2519760762024ea2a71e2c6b5888af2864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321257b51c93964b8f6e59dacd2eb3d2bbdfe0857d0e891c99b3ec0cd584d0f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321257b51c93964b8f6e59dacd2eb3d2bbdfe0857d0e891c99b3ec0cd584d0f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59af7903a168ae4f4b31385076abdfa005a144e56e86d32afffe28ab5f1d2837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59af7903a168ae4f4b31385076abdfa005a144e56e86d32afffe28ab5f1d2837\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://503e437e0ad198ab5af20c8333b0edd1c33ee061714fb81224c3d366be9ca9a6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503e437e0ad198ab5af20c8333b0edd1c33ee061714fb81224c3d366be9ca9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.163285 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhb67\" (UniqueName: \"kubernetes.io/projected/cd122e00-4561-49cd-9477-2517a6094fb5-kube-api-access-fhb67\") pod \"node-ca-htw2b\" (UID: \"cd122e00-4561-49cd-9477-2517a6094fb5\") " pod="openshift-image-registry/node-ca-htw2b" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.187677 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.187718 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.187729 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.187747 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.187758 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:09Z","lastTransitionTime":"2025-12-13T22:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.197376 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d450d44e-8219-4372-904c-6dfeb99953c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a756e1889ab55c446200026441e5add70e87750f0c7bcd844c6d69410828074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57855b825b37a7a5c179476a16a5f4a6d66d9299395ad2bf8972b515ec81335\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51680eb649a8778df43c6e4f2389caa6a69254bef0dee34535d27cc69fd688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://315b5749072410defd6fed09f5cc953c074319c1adda448fccd60a5d25c1f10c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f0c2c3ce34f3961f4be5be9beff2bbf0de2899cfac9219106773fdd8fb1969\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"file observer\\\\nW1213 22:17:05.442934 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 22:17:05.443156 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 22:17:05.444268 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-965818911/tls.crt::/tmp/serving-cert-965818911/tls.key\\\\\\\"\\\\nI1213 22:17:06.201874 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 22:17:06.207737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 22:17:06.207764 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 22:17:06.207790 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 22:17:06.207795 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 22:17:06.221866 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 22:17:06.221906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 22:17:06.221913 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 22:17:06.221918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 22:17:06.221921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 22:17:06.221925 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 22:17:06.221928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 22:17:06.222239 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 22:17:06.225368 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4192aeb8b99901fba1d442e613510c1d2dca07650c4a84e6f585d42b4de74c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.213098 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 22:17:09 crc kubenswrapper[4866]: E1213 22:17:09.213218 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.235510 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-265zz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c32dec92-1cb0-4596-b551-b35b25f09692\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92zd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-265zz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.276299 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g6nd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vpjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-g6nd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.290479 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.290517 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.290529 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.290547 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.290560 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:09Z","lastTransitionTime":"2025-12-13T22:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.315534 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.328986 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-htw2b" Dec 13 22:17:09 crc kubenswrapper[4866]: W1213 22:17:09.341265 4866 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd122e00_4561_49cd_9477_2517a6094fb5.slice/crio-eb66ed2cec7c665a7dfaec914e3ddf518833d06fba4292fc50f524e8d38d271c WatchSource:0}: Error finding container eb66ed2cec7c665a7dfaec914e3ddf518833d06fba4292fc50f524e8d38d271c: Status 404 returned error can't find the container with id eb66ed2cec7c665a7dfaec914e3ddf518833d06fba4292fc50f524e8d38d271c Dec 13 22:17:09 crc kubenswrapper[4866]: E1213 22:17:09.347035 4866 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 13 22:17:09 crc kubenswrapper[4866]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Dec 13 22:17:09 crc kubenswrapper[4866]: while [ true ]; Dec 13 22:17:09 crc kubenswrapper[4866]: do Dec 13 22:17:09 crc kubenswrapper[4866]: for f in $(ls /tmp/serviceca); do Dec 13 22:17:09 crc kubenswrapper[4866]: echo $f Dec 13 22:17:09 crc kubenswrapper[4866]: ca_file_path="/tmp/serviceca/${f}" Dec 13 22:17:09 crc kubenswrapper[4866]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Dec 13 22:17:09 crc kubenswrapper[4866]: reg_dir_path="/etc/docker/certs.d/${f}" Dec 13 22:17:09 crc kubenswrapper[4866]: if [ -e "${reg_dir_path}" ]; then Dec 13 22:17:09 crc kubenswrapper[4866]: cp -u $ca_file_path $reg_dir_path/ca.crt Dec 13 22:17:09 crc kubenswrapper[4866]: else Dec 13 22:17:09 crc kubenswrapper[4866]: mkdir $reg_dir_path Dec 13 22:17:09 crc kubenswrapper[4866]: cp $ca_file_path $reg_dir_path/ca.crt Dec 13 22:17:09 crc kubenswrapper[4866]: fi Dec 13 22:17:09 crc kubenswrapper[4866]: done Dec 13 22:17:09 crc kubenswrapper[4866]: for d in $(ls /etc/docker/certs.d); do Dec 13 22:17:09 crc kubenswrapper[4866]: echo $d Dec 13 22:17:09 crc kubenswrapper[4866]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Dec 13 22:17:09 crc kubenswrapper[4866]: reg_conf_path="/tmp/serviceca/${dp}" Dec 13 22:17:09 crc kubenswrapper[4866]: if [ ! 
-e "${reg_conf_path}" ]; then Dec 13 22:17:09 crc kubenswrapper[4866]: rm -rf /etc/docker/certs.d/$d Dec 13 22:17:09 crc kubenswrapper[4866]: fi Dec 13 22:17:09 crc kubenswrapper[4866]: done Dec 13 22:17:09 crc kubenswrapper[4866]: sleep 60 & wait ${!} Dec 13 22:17:09 crc kubenswrapper[4866]: done Dec 13 22:17:09 crc kubenswrapper[4866]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fhb67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-htw2b_openshift-image-registry(cd122e00-4561-49cd-9477-2517a6094fb5): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 13 22:17:09 crc kubenswrapper[4866]: > logger="UnhandledError" Dec 13 22:17:09 crc kubenswrapper[4866]: E1213 22:17:09.348304 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-htw2b" podUID="cd122e00-4561-49cd-9477-2517a6094fb5" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.354392 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.392931 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.392981 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.392995 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.393017 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.393033 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:09Z","lastTransitionTime":"2025-12-13T22:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.398930 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mh6xz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a704d7c-0ecc-4fb7-96d5-180353c3bf59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mh6xz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.434479 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdf761bd-0d95-4826-ba0d-8caae359ecf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41791b1801c6e8c30ba4f05d06fce59632c94271c4db64853d8d511f13f9e26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4d57b379add2c2a64fb7e84b381674ac05488e23d7687ed9e1456d8d35c3f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00c91b085943293a414c3b4f69185ca44fdaa25c72fb1ecd1d4d8fa2c99ca55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f0ec9ef48f93fde225982567d343b2d6ce5582ba240a10aac915cb1c60ce81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.447172 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-htw2b" event={"ID":"cd122e00-4561-49cd-9477-2517a6094fb5","Type":"ContainerStarted","Data":"eb66ed2cec7c665a7dfaec914e3ddf518833d06fba4292fc50f524e8d38d271c"} Dec 13 22:17:09 crc kubenswrapper[4866]: E1213 22:17:09.448705 4866 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 13 22:17:09 crc kubenswrapper[4866]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Dec 13 22:17:09 crc kubenswrapper[4866]: while [ true ]; Dec 13 22:17:09 crc kubenswrapper[4866]: do Dec 13 22:17:09 crc kubenswrapper[4866]: for f in $(ls /tmp/serviceca); do Dec 13 22:17:09 crc kubenswrapper[4866]: echo $f Dec 13 22:17:09 crc kubenswrapper[4866]: ca_file_path="/tmp/serviceca/${f}" Dec 13 22:17:09 crc kubenswrapper[4866]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Dec 13 22:17:09 crc kubenswrapper[4866]: reg_dir_path="/etc/docker/certs.d/${f}" Dec 13 22:17:09 crc kubenswrapper[4866]: if [ -e "${reg_dir_path}" ]; then Dec 13 22:17:09 crc kubenswrapper[4866]: cp -u $ca_file_path $reg_dir_path/ca.crt Dec 13 22:17:09 crc kubenswrapper[4866]: else Dec 13 22:17:09 crc kubenswrapper[4866]: mkdir $reg_dir_path Dec 13 22:17:09 crc kubenswrapper[4866]: cp $ca_file_path $reg_dir_path/ca.crt Dec 13 22:17:09 crc kubenswrapper[4866]: fi Dec 13 22:17:09 crc kubenswrapper[4866]: done Dec 13 22:17:09 crc kubenswrapper[4866]: for d in $(ls /etc/docker/certs.d); do Dec 13 22:17:09 crc kubenswrapper[4866]: echo $d Dec 13 22:17:09 crc kubenswrapper[4866]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Dec 13 22:17:09 crc kubenswrapper[4866]: reg_conf_path="/tmp/serviceca/${dp}" Dec 13 22:17:09 crc kubenswrapper[4866]: if [ ! 
-e "${reg_conf_path}" ]; then Dec 13 22:17:09 crc kubenswrapper[4866]: rm -rf /etc/docker/certs.d/$d Dec 13 22:17:09 crc kubenswrapper[4866]: fi Dec 13 22:17:09 crc kubenswrapper[4866]: done Dec 13 22:17:09 crc kubenswrapper[4866]: sleep 60 & wait ${!} Dec 13 22:17:09 crc kubenswrapper[4866]: done Dec 13 22:17:09 crc kubenswrapper[4866]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fhb67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-htw2b_openshift-image-registry(cd122e00-4561-49cd-9477-2517a6094fb5): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 13 22:17:09 crc kubenswrapper[4866]: > logger="UnhandledError" Dec 13 22:17:09 crc kubenswrapper[4866]: E1213 22:17:09.449889 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-htw2b" podUID="cd122e00-4561-49cd-9477-2517a6094fb5" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.474748 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.495319 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.495345 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.495352 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.495443 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.495454 4866 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:09Z","lastTransitionTime":"2025-12-13T22:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.513929 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.554837 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2855n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9749ec6a-aa76-4ae0-a9d0-453edbf21bca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qp56q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qp56q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2855n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.597264 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.597299 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.597308 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.597339 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.597347 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:09Z","lastTransitionTime":"2025-12-13T22:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.597441 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.635253 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.679988 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b977f313-87b4-4173-9263-91bc45047631\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zrmrs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.699928 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.699968 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:09 crc kubenswrapper[4866]: 
I1213 22:17:09.699981 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.699995 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.700003 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:09Z","lastTransitionTime":"2025-12-13T22:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.713384 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-htw2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd122e00-4561-49cd-9477-2517a6094fb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:09Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:09Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhb67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-htw2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.764276 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af35aee1-fb3b-462f-a7a3-e663a7f1a7aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb1f73695e99b201ce3d33b7d51712637ea7aa38a14a267e29cad18890ef1b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee175c7ac5ad3a46279f497d29513424b7c6601c518649f63eb6b950c4f04013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f1be11db30f10cdfbe473b20446b445e0354170c4e02a8e2c8d73e4d32851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badb9d461f
bc970e520cbd92ddf854b86a1623943edeb0fe98256a5f14851e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d85c84f61afc57a279dcd5e7bed6eb2519760762024ea2a71e2c6b5888af2864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321257b51c93964b8f6e59dacd2eb3d2bbdfe0857d0e891c99b3ec0cd584d0f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321257b51c93964b8f6e59dacd2eb3d2bbdfe0857d0e891c99b3ec0cd584d0f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59af7903a168ae4f4b31385076abdfa005a144e56e86d32afffe28ab5f1d2837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59af7903a168ae4f4b31385076abdfa005a144e56e86d32afffe28ab5f1d2837\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://503e437e0ad198ab5af20c8333b0edd1c33ee061
714fb81224c3d366be9ca9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503e437e0ad198ab5af20c8333b0edd1c33ee061714fb81224c3d366be9ca9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.796522 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d450d44e-8219-4372-904c-6dfeb99953c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a756e1889ab55c446200026441e5add70e87750f0c7bcd844c6d69410828074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57855b825b37a7a5c179476a16a5f4a6d66d9299395ad2bf8972b515ec81335\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51680eb649a8778df43c6e4f2389caa6a69254bef0dee34535d27cc69fd688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://315b5749072410defd6fed09f5cc953c074319c1adda448fccd60a5d25c1f10c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f0c2c3ce34f3961f4be5be9beff2bbf0de2899cfac9219106773fdd8fb1969\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"file observer\\\\nW1213 22:17:05.442934 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 22:17:05.443156 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 22:17:05.444268 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-965818911/tls.crt::/tmp/serving-cert-965818911/tls.key\\\\\\\"\\\\nI1213 22:17:06.201874 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 22:17:06.207737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 22:17:06.207764 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 22:17:06.207790 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 22:17:06.207795 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 22:17:06.221866 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 22:17:06.221906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 22:17:06.221913 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 22:17:06.221918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 22:17:06.221921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 22:17:06.221925 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 22:17:06.221928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 22:17:06.222239 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 22:17:06.225368 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4192aeb8b99901fba1d442e613510c1d2dca07650c4a84e6f585d42b4de74c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.804222 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.804272 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.804292 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.804311 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.804323 4866 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:09Z","lastTransitionTime":"2025-12-13T22:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.833597 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-265zz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c32dec92-1cb0-4596-b551-b35b25f09692\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92zd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-265zz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.837076 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:09 crc kubenswrapper[4866]: E1213 22:17:09.837176 4866 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:13.837155485 +0000 UTC m=+31.878494037 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.837241 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 22:17:09 crc kubenswrapper[4866]: E1213 22:17:09.837311 4866 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 13 22:17:09 crc kubenswrapper[4866]: E1213 22:17:09.837380 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 22:17:13.83736002 +0000 UTC m=+31.878698582 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.874627 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g6nd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vpjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g6nd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.906851 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.906901 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.906912 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.906928 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.906939 4866 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:09Z","lastTransitionTime":"2025-12-13T22:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.915212 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.938493 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.938545 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.938570 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 22:17:09 crc kubenswrapper[4866]: E1213 22:17:09.938678 4866 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 22:17:09 crc kubenswrapper[4866]: E1213 22:17:09.938698 4866 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 22:17:09 crc kubenswrapper[4866]: E1213 22:17:09.938710 4866 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 22:17:09 crc kubenswrapper[4866]: E1213 22:17:09.938731 4866 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 13 22:17:09 crc kubenswrapper[4866]: E1213 22:17:09.938753 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2025-12-13 22:17:13.938738085 +0000 UTC m=+31.980076637 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 22:17:09 crc kubenswrapper[4866]: E1213 22:17:09.938774 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 22:17:13.938762786 +0000 UTC m=+31.980101338 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 13 22:17:09 crc kubenswrapper[4866]: E1213 22:17:09.938678 4866 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 22:17:09 crc kubenswrapper[4866]: E1213 22:17:09.938793 4866 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 22:17:09 crc kubenswrapper[4866]: E1213 22:17:09.938803 4866 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 22:17:09 crc kubenswrapper[4866]: E1213 22:17:09.938828 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-13 22:17:13.938820017 +0000 UTC m=+31.980158569 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.954870 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:09 crc kubenswrapper[4866]: I1213 22:17:09.999240 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mh6xz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a704d7c-0ecc-4fb7-96d5-180353c3bf59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mh6xz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.009141 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.009174 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.009184 4866 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.009198 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.009210 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:10Z","lastTransitionTime":"2025-12-13T22:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.037252 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.080397 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b977f313-87b4-4173-9263-91bc45047631\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zrmrs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.110774 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.110816 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:10 crc kubenswrapper[4866]: 
I1213 22:17:10.110836 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.110854 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.110865 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:10Z","lastTransitionTime":"2025-12-13T22:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.114845 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-htw2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd122e00-4561-49cd-9477-2517a6094fb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:09Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:09Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhb67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-htw2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 
22:17:10.156184 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.200999 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af35aee1-fb3b-462f-a7a3-e663a7f1a7aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb1f73695e99b201ce3d33b7d51712637ea7aa38a14a267e29cad18890ef1b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee175c7ac5ad3a46279f497d29513424b7c6601c518649f63eb6b950c4f04013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f1be11db30f10cdfbe473b20446b445e0354170c4e02a8e2c8d73e4d32851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badb9d461fbc970e520cbd92ddf854b86a1623
943edeb0fe98256a5f14851e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d85c84f61afc57a279dcd5e7bed6eb2519760762024ea2a71e2c6b5888af2864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321257b51c93964b8f6e59dacd2eb3d2bbdfe0857d0e891c99b3ec0cd584d0f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321257b51c93964b8f6e59dacd2eb3d2bbdfe0857d0e891c99b3ec0cd584d0f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59af7903a168ae4f4b31385076abdfa005a144e56e86d32afffe28ab5f1d2837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59af7903a168ae4f4b31385076abdfa005a144e56e86d32afffe28ab5f1d2837\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://503e437e0ad198ab5af20c8333b0edd1c33ee061714fb81224c3d366be9ca9a6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503e437e0ad198ab5af20c8333b0edd1c33ee061714fb81224c3d366be9ca9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.212358 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 22:17:10 crc kubenswrapper[4866]: E1213 22:17:10.212467 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.212555 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 22:17:10 crc kubenswrapper[4866]: E1213 22:17:10.212686 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.212823 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.212857 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.212873 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.212889 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.212903 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:10Z","lastTransitionTime":"2025-12-13T22:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.236728 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d450d44e-8219-4372-904c-6dfeb99953c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a756e1889ab55c446200026441e5add70e87750f0c7bcd844c6d69410828074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57855b825b37a7a5c179476a16a5f4a6d66d9299395ad2bf8972b515ec81335\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51680eb649a8778df43c6e4f2389caa6a69254bef0dee34535d27cc69fd688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://315b5749072410defd6fed09f5cc953c074319c1adda448fccd60a5d25c1f10c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f0c2c3ce34f3961f4be5be9beff2bbf0de2899cfac9219106773fdd8fb1969\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"file observer\\\\nW1213 22:17:05.442934 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 22:17:05.443156 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 22:17:05.444268 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-965818911/tls.crt::/tmp/serving-cert-965818911/tls.key\\\\\\\"\\\\nI1213 22:17:06.201874 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 22:17:06.207737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 22:17:06.207764 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 22:17:06.207790 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 22:17:06.207795 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 22:17:06.221866 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 22:17:06.221906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 22:17:06.221913 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 22:17:06.221918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 22:17:06.221921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 22:17:06.221925 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 22:17:06.221928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 22:17:06.222239 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 22:17:06.225368 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4192aeb8b99901fba1d442e613510c1d2dca07650c4a84e6f585d42b4de74c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.273265 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-265zz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c32dec92-1cb0-4596-b551-b35b25f09692\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92zd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-265zz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.314648 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.314681 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.314690 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.314702 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.314711 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:10Z","lastTransitionTime":"2025-12-13T22:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.316260 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g6nd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vpjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g6nd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.355133 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.398572 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mh6xz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a704d7c-0ecc-4fb7-96d5-180353c3bf59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mh6xz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.416447 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.416494 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.416507 4866 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.416525 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.416536 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:10Z","lastTransitionTime":"2025-12-13T22:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.442759 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.475539 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.515314 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.518678 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.518773 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.518783 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.518949 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.518965 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:10Z","lastTransitionTime":"2025-12-13T22:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.561176 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2855n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9749ec6a-aa76-4ae0-a9d0-453edbf21bca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qp56q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qp56q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2855n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.597951 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdf761bd-0d95-4826-ba0d-8caae359ecf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41791b1801c6e8c30ba4f05d06fce59632c94271c4db64853d8d511f13f9e26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4d57b379add2c2a64fb7e84b381674ac05488e23d7687ed9e1456d8d35c3f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00c91b085943293a414c3b4f69185ca44fdaa25c72fb1ecd1d4d8fa2c99ca55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f0ec9ef48f93fde225982567d343b2d6ce5582ba240a10aac915cb1c60ce81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.621626 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.621654 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.621663 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.621676 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.621685 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:10Z","lastTransitionTime":"2025-12-13T22:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.723470 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.723506 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.723515 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.723528 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.723538 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:10Z","lastTransitionTime":"2025-12-13T22:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.825672 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.825702 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.825710 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.825723 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.825733 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:10Z","lastTransitionTime":"2025-12-13T22:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.928367 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.928396 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.928413 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.928426 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 22:17:10 crc kubenswrapper[4866]: I1213 22:17:10.928435 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:10Z","lastTransitionTime":"2025-12-13T22:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.031450 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.031486 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.031493 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.031507 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.031515 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:11Z","lastTransitionTime":"2025-12-13T22:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.133640 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.133679 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.133690 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.133722 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.133733 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:11Z","lastTransitionTime":"2025-12-13T22:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.213130 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 13 22:17:11 crc kubenswrapper[4866]: E1213 22:17:11.213249 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.240014 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.240072 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.240083 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.240098 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.240109 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:11Z","lastTransitionTime":"2025-12-13T22:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.341902 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.341959 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.341974 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.341995 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.342011 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:11Z","lastTransitionTime":"2025-12-13T22:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.443931 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.443969 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.443977 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.443991 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.443999 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:11Z","lastTransitionTime":"2025-12-13T22:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.460822 4866 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.546704 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.546751 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.546763 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.546781 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.546793 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:11Z","lastTransitionTime":"2025-12-13T22:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.649419 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.649454 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.649464 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.649477 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.649487 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:11Z","lastTransitionTime":"2025-12-13T22:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.751584 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.751636 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.751650 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.751668 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.751682 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:11Z","lastTransitionTime":"2025-12-13T22:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.853703 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.853755 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.853770 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.853792 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.853807 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:11Z","lastTransitionTime":"2025-12-13T22:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.956020 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.956070 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.956082 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.956107 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:11 crc kubenswrapper[4866]: I1213 22:17:11.956120 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:11Z","lastTransitionTime":"2025-12-13T22:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.057839 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.057881 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.057896 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.057913 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.057925 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:12Z","lastTransitionTime":"2025-12-13T22:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.078203 4866 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 22:17:12 crc kubenswrapper[4866]: W1213 22:17:12.080123 4866 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.159971 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.160003 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.160013 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.160027 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.160037 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:12Z","lastTransitionTime":"2025-12-13T22:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.212320 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.212320 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 22:17:12 crc kubenswrapper[4866]: E1213 22:17:12.212452 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 22:17:12 crc kubenswrapper[4866]: E1213 22:17:12.212508 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.222916 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.230954 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.240222 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mh6xz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a704d7c-0ecc-4fb7-96d5-180353c3bf59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mh6xz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.252269 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdf761bd-0d95-4826-ba0d-8caae359ecf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41791b1801c6e8c30ba4f05d06fce59632c94271c4db64853d8d511f13f9e26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4d57b379add2c2a64fb7e84b381674ac05488e23d7687ed9e1456d8d35c3f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00c91b085943293a414c3b4f69185ca44fdaa25c72fb1ecd1d4d8fa2c99ca55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f0ec9ef48f93fde225982567d343b2d6ce5582ba240a10aac915cb1c60ce81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.260273 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.261430 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.261460 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.261471 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.261486 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.261505 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:12Z","lastTransitionTime":"2025-12-13T22:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.268474 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.275000 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2855n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9749ec6a-aa76-4ae0-a9d0-453edbf21bca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qp56q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qp56q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2855n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.283061 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.290187 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.304944 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b977f313-87b4-4173-9263-91bc45047631\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zrmrs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.311858 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-htw2b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd122e00-4561-49cd-9477-2517a6094fb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:09Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:09Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhb67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-htw2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.326986 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af35aee1-fb3b-462f-a7a3-e663a7f1a7aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb1f73695e99b201ce3d33b7d51712637ea7aa38a14a267e29cad18890ef1b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee175c7ac5ad3a46279f497d29513424b7c6601c518649f63eb6b950c4f04013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f1be11db30f10cdfbe473b20446b445e0354170c4e02a8e2c8d73e4d32851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badb9d461fbc970e520cbd92ddf854b86a1623
943edeb0fe98256a5f14851e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d85c84f61afc57a279dcd5e7bed6eb2519760762024ea2a71e2c6b5888af2864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321257b51c93964b8f6e59dacd2eb3d2bbdfe0857d0e891c99b3ec0cd584d0f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321257b51c93964b8f6e59dacd2eb3d2bbdfe0857d0e891c99b3ec0cd584d0f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59af7903a168ae4f4b31385076abdfa005a144e56e86d32afffe28ab5f1d2837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59af7903a168ae4f4b31385076abdfa005a144e56e86d32afffe28ab5f1d2837\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://503e437e0ad198ab5af20c8333b0edd1c33ee061714fb81224c3d366be9ca9a6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503e437e0ad198ab5af20c8333b0edd1c33ee061714fb81224c3d366be9ca9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.343199 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d450d44e-8219-4372-904c-6dfeb99953c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a756e1889ab55c446200026441e5add70e87750f0c7bcd844c6d69410828074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57855b825b37a7a5c179476a16a5f4a6d66d9299395ad2bf8972b515ec81335\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51680eb649a8778df43c6e4f2389caa6a69254bef0dee34535d27cc69fd688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://315b5749072410defd6fed09f5cc953c074319c1adda448fccd60a5d25c1f10c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f0c2c3ce34f3961f4be5be9beff2bbf0de2899cfac9219106773fdd8fb1969\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"file observer\\\\nW1213 22:17:05.442934 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 22:17:05.443156 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 22:17:05.444268 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-965818911/tls.crt::/tmp/serving-cert-965818911/tls.key\\\\\\\"\\\\nI1213 22:17:06.201874 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 22:17:06.207737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 22:17:06.207764 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 22:17:06.207790 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 22:17:06.207795 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 22:17:06.221866 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 22:17:06.221906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 22:17:06.221913 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 22:17:06.221918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 22:17:06.221921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 22:17:06.221925 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 22:17:06.221928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 22:17:06.222239 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 22:17:06.225368 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4192aeb8b99901fba1d442e613510c1d2dca07650c4a84e6f585d42b4de74c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.350189 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-265zz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c32dec92-1cb0-4596-b551-b35b25f09692\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92zd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-265zz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.360093 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g6nd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vpjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g6nd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.363711 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.363748 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.363757 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:12 crc 
kubenswrapper[4866]: I1213 22:17:12.363770 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.363780 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:12Z","lastTransitionTime":"2025-12-13T22:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.466549 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.466604 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.466623 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.466647 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.466665 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:12Z","lastTransitionTime":"2025-12-13T22:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.569171 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.569223 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.569243 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.569264 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.569278 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:12Z","lastTransitionTime":"2025-12-13T22:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
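
Every "Failed to update status for pod" entry above dies on the same endpoint: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743, which nothing is listening on yet during this boot window. A minimal probe of that listener, sketched in Go for illustration (the probe itself is not part of this log):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Same address the kubelet's status patches fail against:
    	// Post "https://127.0.0.1:9743/pod?timeout=10s": connect: connection refused.
    	conn, err := net.DialTimeout("tcp", "127.0.0.1:9743", 2*time.Second)
    	if err != nil {
    		// Expected while the network-node-identity webhook backend is still down.
    		fmt.Println("webhook backend not listening:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("webhook backend is accepting connections")
    }

Once that port is bound, the same dial succeeds and the kubelet's status manager should get its queued patches through on a later sync.
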
Has your network provider started?"} Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.672409 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.672459 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.672470 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.672487 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.672500 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:12Z","lastTransitionTime":"2025-12-13T22:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.774268 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.774311 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.774325 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.774368 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.774380 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:12Z","lastTransitionTime":"2025-12-13T22:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.877469 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.877542 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.877567 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.877597 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.877618 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:12Z","lastTransitionTime":"2025-12-13T22:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.981022 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.981152 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.981175 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.981204 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:12 crc kubenswrapper[4866]: I1213 22:17:12.981226 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:12Z","lastTransitionTime":"2025-12-13T22:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.083527 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.083597 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.083621 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.083650 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.083672 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:13Z","lastTransitionTime":"2025-12-13T22:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.186838 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.186919 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.186944 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.186975 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.186997 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:13Z","lastTransitionTime":"2025-12-13T22:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
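
The setters.go lines repeating roughly every 100 ms through this window all publish the same Ready condition. A short sketch of decoding one such payload, assuming the k8s.io/api/core/v1 types (an assumption for illustration; the log itself only shows the JSON):

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	// Condition payload copied verbatim from a setters.go entry above.
    	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:12Z","lastTransitionTime":"2025-12-13T22:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`

    	var cond corev1.NodeCondition
    	if err := json.Unmarshal([]byte(raw), &cond); err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s=%s (%s): %s\n", cond.Type, cond.Status, cond.Reason, cond.Message)
    }

The condition stays False with reason KubeletNotReady until a CNI configuration file shows up under /etc/kubernetes/cni/net.d/; between entries only the heartbeat and transition timestamps advance.
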
Has your network provider started?"} Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.212598 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 22:17:13 crc kubenswrapper[4866]: E1213 22:17:13.212837 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.290544 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.290626 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.290650 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.290679 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.290707 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:13Z","lastTransitionTime":"2025-12-13T22:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.393783 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.393834 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.393845 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.393863 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.393874 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:13Z","lastTransitionTime":"2025-12-13T22:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.496321 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.496365 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.496377 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.496393 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.496404 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:13Z","lastTransitionTime":"2025-12-13T22:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.598950 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.599041 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.599116 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.599149 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.599171 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:13Z","lastTransitionTime":"2025-12-13T22:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.703328 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.703423 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.703447 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.703476 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.703498 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:13Z","lastTransitionTime":"2025-12-13T22:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.806390 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.806435 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.806452 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.806469 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.806479 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:13Z","lastTransitionTime":"2025-12-13T22:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.878535 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.878710 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 22:17:13 crc kubenswrapper[4866]: E1213 22:17:13.878848 4866 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 13 22:17:13 crc kubenswrapper[4866]: E1213 22:17:13.878937 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 22:17:21.878912264 +0000 UTC m=+39.920250856 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 13 22:17:13 crc kubenswrapper[4866]: E1213 22:17:13.879170 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:21.87914138 +0000 UTC m=+39.920479932 (durationBeforeRetry 8s). 
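
Each failed volume operation here is parked with an explicit deadline rather than retried in a tight loop: the "No retries permitted until" instant is simply the failure time plus the logged backoff, and the "m=+39.92..." suffix is Go's monotonic-clock reading, about 40 s of kubelet process uptime. A quick check of that arithmetic, with both values copied from the nestedpendingoperations.go entry above:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Deadline and backoff as logged for the CSI UnmountVolume failure above.
    	deadline, err := time.Parse(time.RFC3339Nano, "2025-12-13T22:17:21.87914138Z")
    	if err != nil {
    		panic(err)
    	}
    	backoff := 8 * time.Second // "durationBeforeRetry 8s"

    	// 22:17:21.87914138 minus 8 s gives 22:17:13.87914138, i.e. the moment the
    	// attempt failed (the entry itself is stamped 22:17:13.879170).
    	fmt.Println(deadline.Add(-backoff).UTC())
    }
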
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.909088 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.909155 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.909165 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.909180 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.909191 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:13Z","lastTransitionTime":"2025-12-13T22:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.979653 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.979699 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 22:17:13 crc kubenswrapper[4866]: I1213 22:17:13.979731 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 22:17:13 crc kubenswrapper[4866]: E1213 22:17:13.979854 4866 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 22:17:13 crc kubenswrapper[4866]: E1213 22:17:13.979873 4866 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 22:17:13 crc kubenswrapper[4866]: E1213 22:17:13.979886 4866 projected.go:194] Error preparing data for projected 
volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 22:17:13 crc kubenswrapper[4866]: E1213 22:17:13.979929 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-13 22:17:21.97991407 +0000 UTC m=+40.021252622 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 22:17:13 crc kubenswrapper[4866]: E1213 22:17:13.979987 4866 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 13 22:17:13 crc kubenswrapper[4866]: E1213 22:17:13.980111 4866 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 22:17:13 crc kubenswrapper[4866]: E1213 22:17:13.980122 4866 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 22:17:13 crc kubenswrapper[4866]: E1213 22:17:13.980130 4866 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 22:17:13 crc kubenswrapper[4866]: E1213 22:17:13.980152 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-13 22:17:21.980145445 +0000 UTC m=+40.021483997 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 22:17:13 crc kubenswrapper[4866]: E1213 22:17:13.980178 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 22:17:21.980173416 +0000 UTC m=+40.021511968 (durationBeforeRetry 8s). 
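
The projected.go failures above name the two ConfigMaps that every kube-api-access-* projected volume bundles next to the service-account token: kube-root-ca.crt and openshift-service-ca.crt. "Not registered" here means the kubelet has not yet synced those objects into its local store after restart, the same first-sync gap behind the CreateContainerConfigError "services have not yet been read at least once" messages earlier. A client-go sketch that checks whether the two objects are visible from outside the kubelet (the kubeconfig path is hypothetical; adjust for the environment):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Hypothetical kubeconfig location; not taken from this log.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// The two objects named in the projected.go errors above.
    	for _, name := range []string{"kube-root-ca.crt", "openshift-service-ca.crt"} {
    		_, err := cs.CoreV1().ConfigMaps("openshift-network-diagnostics").Get(context.TODO(), name, metav1.GetOptions{})
    		fmt.Printf("%s: err=%v\n", name, err)
    	}
    }

If both Gets succeed against the API server while the kubelet still reports "not registered", the gap is in the kubelet's own cache rather than in cluster state.
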
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.011453 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.011495 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.011505 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.011520 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.011531 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:14Z","lastTransitionTime":"2025-12-13T22:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.114337 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.114380 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.114392 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.114409 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.114421 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:14Z","lastTransitionTime":"2025-12-13T22:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.212827 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 22:17:14 crc kubenswrapper[4866]: E1213 22:17:14.212962 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.212827 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 22:17:14 crc kubenswrapper[4866]: E1213 22:17:14.213308 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.217423 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.217644 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.217737 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.217840 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.217903 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:14Z","lastTransitionTime":"2025-12-13T22:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.320536 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.320592 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.320601 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.320615 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.320624 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:14Z","lastTransitionTime":"2025-12-13T22:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.422900 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.422949 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.422958 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.422972 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.422981 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:14Z","lastTransitionTime":"2025-12-13T22:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.525679 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.525726 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.525738 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.525751 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.525760 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:14Z","lastTransitionTime":"2025-12-13T22:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.628196 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.628252 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.628268 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.628288 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.628303 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:14Z","lastTransitionTime":"2025-12-13T22:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.730909 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.730946 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.730955 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.730970 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.730979 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:14Z","lastTransitionTime":"2025-12-13T22:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.833861 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.833901 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.833911 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.833924 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.833933 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:14Z","lastTransitionTime":"2025-12-13T22:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.936489 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.936546 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.936563 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.936586 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:14 crc kubenswrapper[4866]: I1213 22:17:14.936604 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:14Z","lastTransitionTime":"2025-12-13T22:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.038625 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.038663 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.038672 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.038690 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.038709 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:15Z","lastTransitionTime":"2025-12-13T22:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.140533 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.140572 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.140583 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.140598 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.140609 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:15Z","lastTransitionTime":"2025-12-13T22:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.212498 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 13 22:17:15 crc kubenswrapper[4866]: E1213 22:17:15.212640 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
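Every one of these heartbeat entries repeats the same root cause: the container runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ contains no CNI configuration, so the kubelet keeps the node NotReady and refuses to create new pod sandboxes. A quick way to check the condition by hand is to look for config files in that directory; this is a hypothetical standalone probe mirroring the check, not the CRI-O code path itself:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Hypothetical probe for the condition named in the log: CNI runtimes look
// for *.conf/*.conflist/*.json files in the configured CNI directory, and
// an empty directory yields NetworkReady=false.
func main() {
	dir := "/etc/kubernetes/cni/net.d" // path taken from the log
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", dir, err)
		os.Exit(1)
	}
	found := false
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Printf("CNI config present: %s\n", filepath.Join(dir, e.Name()))
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration file found; node will stay NotReady")
	}
}

On this node the directory is presumably empty until the OVN-Kubernetes pods write their config, at which point the NotReady loop below should stop on its own.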
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.242281 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.242314 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.242324 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.242338 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.242347 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:15Z","lastTransitionTime":"2025-12-13T22:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.344428 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.344470 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.344482 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.344499 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.344511 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:15Z","lastTransitionTime":"2025-12-13T22:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.446904 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.447184 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.447264 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.447353 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.447438 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:15Z","lastTransitionTime":"2025-12-13T22:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.550062 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.550096 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.550106 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.550119 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.550129 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:15Z","lastTransitionTime":"2025-12-13T22:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.651797 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.651846 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.651858 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.651874 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.651886 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:15Z","lastTransitionTime":"2025-12-13T22:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.754141 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.754175 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.754184 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.754197 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.754205 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:15Z","lastTransitionTime":"2025-12-13T22:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.856496 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.856536 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.856545 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.856559 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.856569 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:15Z","lastTransitionTime":"2025-12-13T22:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.958040 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.958103 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.958119 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.958141 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:15 crc kubenswrapper[4866]: I1213 22:17:15.958152 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:15Z","lastTransitionTime":"2025-12-13T22:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.061794 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.061835 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.061844 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.061858 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.061870 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:16Z","lastTransitionTime":"2025-12-13T22:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.163998 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.164034 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.164060 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.164075 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.164088 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:16Z","lastTransitionTime":"2025-12-13T22:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.212809 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 22:17:16 crc kubenswrapper[4866]: E1213 22:17:16.212969 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.213223 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 22:17:16 crc kubenswrapper[4866]: E1213 22:17:16.213361 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.265933 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.266186 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.266266 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.266343 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.266409 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:16Z","lastTransitionTime":"2025-12-13T22:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.368646 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.368673 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.368683 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.368697 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.368707 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:16Z","lastTransitionTime":"2025-12-13T22:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.471150 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.471210 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.471232 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.471261 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.471282 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:16Z","lastTransitionTime":"2025-12-13T22:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.573962 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.573995 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.574003 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.574024 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.574034 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:16Z","lastTransitionTime":"2025-12-13T22:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.677029 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.677616 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.677702 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.677779 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.677849 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:16Z","lastTransitionTime":"2025-12-13T22:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.781031 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.781089 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.781101 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.781117 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.781127 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:16Z","lastTransitionTime":"2025-12-13T22:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.883310 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.883427 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.883452 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.883469 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.883478 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:16Z","lastTransitionTime":"2025-12-13T22:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.884212 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.884235 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.884247 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.884260 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.884271 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:16Z","lastTransitionTime":"2025-12-13T22:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:16 crc kubenswrapper[4866]: E1213 22:17:16.894787 4866 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T22:17:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T22:17:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T22:17:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T22:17:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cf8239d0-c705-4abb-ba7d-2dc5aaf26a3e\\\",\\\"systemUUID\\\":\\\"36cffe06-8718-49d3-bf76-a8b562df5fba\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
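The status patch itself is well-formed; it is rejected because the API server must consult the node.network-node-identity.openshift.io admission webhook at 127.0.0.1:9743, and nothing is listening there yet, which is consistent with the network components not having started (the same CNI gap seen above). A small connectivity probe for that endpoint, as a hypothetical standard-library helper:

package main

import (
	"fmt"
	"net"
	"time"
)

// Hypothetical probe for the webhook endpoint named in the error. A
// "connection refused" result means nothing is listening on 9743 yet,
// matching the network-node-identity components not having started.
func main() {
	conn, err := net.DialTimeout("tcp", "127.0.0.1:9743", 2*time.Second)
	if err != nil {
		fmt.Printf("webhook endpoint unreachable: %v\n", err)
		return
	}
	conn.Close()
	fmt.Println("webhook endpoint is accepting TCP connections")
}

As the retries below show, the kubelet keeps the last-known status and re-attempts the patch, so these errors clear once the webhook backend comes up.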
Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.899292 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.899359 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.899423 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.899540 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.899613 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:16Z","lastTransitionTime":"2025-12-13T22:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 22:17:16 crc kubenswrapper[4866]: E1213 22:17:16.910377 4866 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T22:17:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T22:17:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T22:17:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T22:17:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cf8239d0-c705-4abb-ba7d-2dc5aaf26a3e\\\",\\\"systemUUID\\\":\\\"36cffe06-8718-49d3-bf76-a8b562df5fba\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.914558 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.914612 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.914637 4866 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.914666 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.914687 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:16Z","lastTransitionTime":"2025-12-13T22:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:16 crc kubenswrapper[4866]: E1213 22:17:16.926826 4866 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T22:17:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T22:17:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T22:17:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T22:17:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cf8239d0-c705-4abb-ba7d-2dc5aaf26a3e\\\",\\\"systemUUID\\\":\\\"36cffe06-8718-49d3-bf76-a8b562df5fba\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.932061 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.932125 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.932139 4866 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.932156 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.932169 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:16Z","lastTransitionTime":"2025-12-13T22:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:16 crc kubenswrapper[4866]: E1213 22:17:16.942612 4866 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T22:17:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T22:17:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T22:17:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T22:17:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cf8239d0-c705-4abb-ba7d-2dc5aaf26a3e\\\",\\\"systemUUID\\\":\\\"36cffe06-8718-49d3-bf76-a8b562df5fba\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.945930 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.945963 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.945972 4866 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.945986 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.945995 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:16Z","lastTransitionTime":"2025-12-13T22:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:16 crc kubenswrapper[4866]: E1213 22:17:16.956733 4866 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T22:17:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T22:17:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T22:17:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T22:17:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cf8239d0-c705-4abb-ba7d-2dc5aaf26a3e\\\",\\\"systemUUID\\\":\\\"36cffe06-8718-49d3-bf76-a8b562df5fba\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:16 crc kubenswrapper[4866]: E1213 22:17:16.956838 4866 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.986655 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.986747 4866 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.986803 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.986826 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:16 crc kubenswrapper[4866]: I1213 22:17:16.986874 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:16Z","lastTransitionTime":"2025-12-13T22:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.090164 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.090218 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.090236 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.090259 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.090275 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:17Z","lastTransitionTime":"2025-12-13T22:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.191909 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.191971 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.191992 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.192020 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.192041 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:17Z","lastTransitionTime":"2025-12-13T22:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.212306 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 22:17:17 crc kubenswrapper[4866]: E1213 22:17:17.212542 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.294130 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.294342 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.294418 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.294498 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.294586 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:17Z","lastTransitionTime":"2025-12-13T22:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.397266 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.397309 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.397317 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.397328 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.397336 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:17Z","lastTransitionTime":"2025-12-13T22:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.500594 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.500649 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.500666 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.500689 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.500708 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:17Z","lastTransitionTime":"2025-12-13T22:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.603385 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.603609 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.603714 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.603834 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.603906 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:17Z","lastTransitionTime":"2025-12-13T22:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.706405 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.706445 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.706456 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.706472 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.706484 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:17Z","lastTransitionTime":"2025-12-13T22:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.808978 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.809005 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.809014 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.809026 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.809034 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:17Z","lastTransitionTime":"2025-12-13T22:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.911764 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.911830 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.911852 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.911882 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:17 crc kubenswrapper[4866]: I1213 22:17:17.911904 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:17Z","lastTransitionTime":"2025-12-13T22:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.014628 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.014751 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.014777 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.014805 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.014827 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:18Z","lastTransitionTime":"2025-12-13T22:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.117906 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.117971 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.117994 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.118027 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.118086 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:18Z","lastTransitionTime":"2025-12-13T22:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.212753 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.212753 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 22:17:18 crc kubenswrapper[4866]: E1213 22:17:18.213153 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 22:17:18 crc kubenswrapper[4866]: E1213 22:17:18.213245 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.222191 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.222260 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.222286 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.222313 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.222334 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:18Z","lastTransitionTime":"2025-12-13T22:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.324107 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.324402 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.324414 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.324430 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.324443 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:18Z","lastTransitionTime":"2025-12-13T22:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.427131 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.427166 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.427179 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.427195 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.427206 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:18Z","lastTransitionTime":"2025-12-13T22:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.479590 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2855n" event={"ID":"9749ec6a-aa76-4ae0-a9d0-453edbf21bca","Type":"ContainerStarted","Data":"26c741b622f1c2bb9339ddf226c75ab27cc68c4989f5c53436b7bf6e42b82176"} Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.529678 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.529706 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.529714 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.529726 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.529735 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:18Z","lastTransitionTime":"2025-12-13T22:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.631839 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.631866 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.631874 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.631887 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.631895 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:18Z","lastTransitionTime":"2025-12-13T22:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.733958 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.733990 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.733998 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.734012 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.734020 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:18Z","lastTransitionTime":"2025-12-13T22:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.836847 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.836879 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.836890 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.836908 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.836919 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:18Z","lastTransitionTime":"2025-12-13T22:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.939852 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.939891 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.939904 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.939922 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.939932 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:18Z","lastTransitionTime":"2025-12-13T22:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.977958 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxpgw"] Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.978359 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxpgw" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.980625 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.982012 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.987615 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:18 crc kubenswrapper[4866]: I1213 22:17:18.994873 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.003735 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mh6xz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a704d7c-0ecc-4fb7-96d5-180353c3bf59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mh6xz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.011459 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdf761bd-0d95-4826-ba0d-8caae359ecf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41791b1801c6e8c30ba4f05d06fce59632c94271c4db64853d8d511f13f9e26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4d57b379add2c2a64fb7e84b381674ac05488e23d7687ed9e1456d8d35c3f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00c91b085943293a414c3b4f69185ca44fdaa25c72fb1ecd1d4d8fa2c99ca55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f0ec9ef48f93fde225982567d343b2d6ce5582ba240a10aac915cb1c60ce81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.019359 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.028684 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fggn\" (UniqueName: \"kubernetes.io/projected/b196cacb-2343-4909-8566-b77d46744231-kube-api-access-6fggn\") pod \"ovnkube-control-plane-749d76644c-wxpgw\" (UID: \"b196cacb-2343-4909-8566-b77d46744231\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxpgw" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.028764 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b196cacb-2343-4909-8566-b77d46744231-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-wxpgw\" (UID: \"b196cacb-2343-4909-8566-b77d46744231\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxpgw" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.028791 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b196cacb-2343-4909-8566-b77d46744231-env-overrides\") pod \"ovnkube-control-plane-749d76644c-wxpgw\" (UID: \"b196cacb-2343-4909-8566-b77d46744231\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxpgw" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.028840 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b196cacb-2343-4909-8566-b77d46744231-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-wxpgw\" (UID: \"b196cacb-2343-4909-8566-b77d46744231\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxpgw" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.033876 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.042397 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.042424 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.042432 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.042445 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.042454 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:19Z","lastTransitionTime":"2025-12-13T22:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.043569 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2855n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9749ec6a-aa76-4ae0-a9d0-453edbf21bca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qp56q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qp56q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2855n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.054274 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.064035 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.077957 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b977f313-87b4-4173-9263-91bc45047631\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zrmrs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.084303 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-htw2b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd122e00-4561-49cd-9477-2517a6094fb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:09Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:09Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhb67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-htw2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.093695 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxpgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b196cacb-2343-4909-8566-b77d46744231\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fggn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fggn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wxpgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.102623 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g6nd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vpjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g6nd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.117211 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af35aee1-fb3b-462f-a7a3-e663a7f1a7aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb1f73695e99b201ce3d33b7d51712637ea7aa38a14a267e29cad18890ef1b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee175c7ac5ad3a46279f497d29513424b7c6601c518649f63eb6b950c4f04013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f1be11db30f10cdfbe473b20446b445e0354170c4e02a8e2c8d73e4d32851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badb9d461fbc970e520cbd92ddf854b86a1623
943edeb0fe98256a5f14851e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d85c84f61afc57a279dcd5e7bed6eb2519760762024ea2a71e2c6b5888af2864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321257b51c93964b8f6e59dacd2eb3d2bbdfe0857d0e891c99b3ec0cd584d0f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321257b51c93964b8f6e59dacd2eb3d2bbdfe0857d0e891c99b3ec0cd584d0f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59af7903a168ae4f4b31385076abdfa005a144e56e86d32afffe28ab5f1d2837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59af7903a168ae4f4b31385076abdfa005a144e56e86d32afffe28ab5f1d2837\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://503e437e0ad198ab5af20c8333b0edd1c33ee061714fb81224c3d366be9ca9a6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503e437e0ad198ab5af20c8333b0edd1c33ee061714fb81224c3d366be9ca9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.129956 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b196cacb-2343-4909-8566-b77d46744231-env-overrides\") pod \"ovnkube-control-plane-749d76644c-wxpgw\" (UID: \"b196cacb-2343-4909-8566-b77d46744231\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxpgw" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.130006 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b196cacb-2343-4909-8566-b77d46744231-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-wxpgw\" (UID: \"b196cacb-2343-4909-8566-b77d46744231\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxpgw" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.130041 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fggn\" (UniqueName: \"kubernetes.io/projected/b196cacb-2343-4909-8566-b77d46744231-kube-api-access-6fggn\") pod \"ovnkube-control-plane-749d76644c-wxpgw\" (UID: \"b196cacb-2343-4909-8566-b77d46744231\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxpgw" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.130077 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b196cacb-2343-4909-8566-b77d46744231-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-wxpgw\" (UID: \"b196cacb-2343-4909-8566-b77d46744231\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxpgw" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.130676 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b196cacb-2343-4909-8566-b77d46744231-env-overrides\") pod \"ovnkube-control-plane-749d76644c-wxpgw\" (UID: \"b196cacb-2343-4909-8566-b77d46744231\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxpgw" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.131095 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b196cacb-2343-4909-8566-b77d46744231-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-wxpgw\" (UID: \"b196cacb-2343-4909-8566-b77d46744231\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxpgw" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.132218 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d450d44e-8219-4372-904c-6dfeb99953c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a756e1889ab55c446200026441e5add70e87750f0c7bcd844c6d69410828074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57855b825b37a7a5c179476a16a5f4a6d66d9299395ad2bf8972b515ec81335\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51680eb649a8778df43c6e4f2389ca
a6a69254bef0dee34535d27cc69fd688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://315b5749072410defd6fed09f5cc953c074319c1adda448fccd60a5d25c1f10c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f0c2c3ce34f3961f4be5be9beff2bbf0de2899cfac9219106773fdd8fb1969\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"file observer\\\\nW1213 22:17:05.442934 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 22:17:05.443156 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 22:17:05.444268 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-965818911/tls.crt::/tmp/serving-cert-965818911/tls.key\\\\\\\"\\\\nI1213 22:17:06.201874 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 22:17:06.207737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 22:17:06.207764 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 22:17:06.207790 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 22:17:06.207795 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 22:17:06.221866 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 22:17:06.221906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 22:17:06.221913 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 22:17:06.221918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 22:17:06.221921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 22:17:06.221925 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 22:17:06.221928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 22:17:06.222239 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 22:17:06.225368 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4192aeb8b99901fba1d442e613510c1d2dca07650c4a84e6f585d42b4de74c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.134756 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b196cacb-2343-4909-8566-b77d46744231-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-wxpgw\" (UID: \"b196cacb-2343-4909-8566-b77d46744231\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxpgw" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.140032 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-265zz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c32dec92-1cb0-4596-b551-b35b25f09692\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92zd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-265zz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.144654 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.144686 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.144697 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.144713 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.144724 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:19Z","lastTransitionTime":"2025-12-13T22:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.149027 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fggn\" (UniqueName: \"kubernetes.io/projected/b196cacb-2343-4909-8566-b77d46744231-kube-api-access-6fggn\") pod \"ovnkube-control-plane-749d76644c-wxpgw\" (UID: \"b196cacb-2343-4909-8566-b77d46744231\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxpgw" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.212330 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 22:17:19 crc kubenswrapper[4866]: E1213 22:17:19.212637 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.247629 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.247655 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.247663 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.247676 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.247684 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:19Z","lastTransitionTime":"2025-12-13T22:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.350006 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.350038 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.350062 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.350074 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.350082 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:19Z","lastTransitionTime":"2025-12-13T22:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.452413 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.452442 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.452452 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.452464 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.452473 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:19Z","lastTransitionTime":"2025-12-13T22:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.484001 4866 generic.go:334] "Generic (PLEG): container finished" podID="9a704d7c-0ecc-4fb7-96d5-180353c3bf59" containerID="4ce2031353aecc611e073433b53630ea2a6e0b0811164912dd5a62acf52ae567" exitCode=0 Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.484072 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mh6xz" event={"ID":"9a704d7c-0ecc-4fb7-96d5-180353c3bf59","Type":"ContainerDied","Data":"4ce2031353aecc611e073433b53630ea2a6e0b0811164912dd5a62acf52ae567"} Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.490303 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2855n" event={"ID":"9749ec6a-aa76-4ae0-a9d0-453edbf21bca","Type":"ContainerStarted","Data":"b94cab8b6884ee87e178ab8d41f46afa62fc20bd3a7f7f4db6d93e3f923cca5b"} Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.494913 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxpgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b196cacb-2343-4909-8566-b77d46744231\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fggn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fggn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wxpgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.506878 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.517421 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.536138 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b977f313-87b4-4173-9263-91bc45047631\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zrmrs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.546043 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-htw2b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd122e00-4561-49cd-9477-2517a6094fb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:09Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:09Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhb67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-htw2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.555667 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-265zz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c32dec92-1cb0-4596-b551-b35b25f09692\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92zd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-265zz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.557484 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.557527 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.557548 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.557572 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.557589 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:19Z","lastTransitionTime":"2025-12-13T22:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.563964 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g6nd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vpjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g6nd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.570873 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxpgw" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.582553 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af35aee1-fb3b-462f-a7a3-e663a7f1a7aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb1f73695e99b201ce3d33b7d51712637ea7aa38a14a267e29cad18890ef1b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee175c7ac5ad3a46279f497d29513424b7c6601c518649f63eb6b950c4f04013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f1be11db30f10cdfbe473b20446b445e0354170c4e02a8e2c8d73e4d32851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-
13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badb9d461fbc970e520cbd92ddf854b86a1623943edeb0fe98256a5f14851e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d85c84f61afc57a279dcd5e7bed6eb2519760762024ea2a71e2c6b5888af2864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321257b51c93964b8f6e59dacd2eb3d2bbdfe0857d0e891c99b3ec0cd584d0f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321257b51c93964b8f6e59dacd2eb3d2bbdfe0857d0e891c99b3ec0cd584d0f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59af7903a168ae4f4b31385076abdfa005a144e56e86d32afffe28ab5f1d2837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59af7903a168ae4f4b31385076abdfa005a144e56e
86d32afffe28ab5f1d2837\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://503e437e0ad198ab5af20c8333b0edd1c33ee061714fb81224c3d366be9ca9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503e437e0ad198ab5af20c8333b0edd1c33ee061714fb81224c3d366be9ca9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.594618 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d450d44e-8219-4372-904c-6dfeb99953c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a756e1889ab55c446200026441e5add70e87750f0c7bcd844c6d69410828074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57855b825b37a7a5c179476a16a5f4a6d66d9299395ad2bf8972b515ec81335\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51680eb649a8778df43c6e4f2389caa6a69254bef0dee34535d27cc69fd688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://315b5749072410defd6fed09f5cc953c074319c1adda448fccd60a5d25c1f10c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f0c2c3ce34f3961f4be5be9beff2bbf0de2899cfac9219106773fdd8fb1969\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"file observer\\\\nW1213 22:17:05.442934 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 22:17:05.443156 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 22:17:05.444268 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-965818911/tls.crt::/tmp/serving-cert-965818911/tls.key\\\\\\\"\\\\nI1213 22:17:06.201874 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 22:17:06.207737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 22:17:06.207764 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 22:17:06.207790 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 22:17:06.207795 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 22:17:06.221866 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 22:17:06.221906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 22:17:06.221913 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 22:17:06.221918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 22:17:06.221921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 22:17:06.221925 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 22:17:06.221928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 22:17:06.222239 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 22:17:06.225368 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4192aeb8b99901fba1d442e613510c1d2dca07650c4a84e6f585d42b4de74c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.608204 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.619775 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.635697 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mh6xz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a704d7c-0ecc-4fb7-96d5-180353c3bf59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ce2031353aecc611e073433b53630ea2a6e0b0811164912dd5a62acf52ae567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ce2031353aecc611e073433b53630ea2a6e0b0811164912dd5a62acf52ae567\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mh6xz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.646641 4866 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdf761bd-0d95-4826-ba0d-8caae359ecf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41791b1801c6e8c30ba4f05d06fce59632c94271c4db64853d8d511f13f9e26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4d57b379add2c2a64fb7e84b381674ac05488e23d7687ed9e1456d8d35c3f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00c91b085943293a414c3b4f69185ca44fdaa25c72fb1ecd1d4d8fa2c99ca55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f0ec9ef48f93fde225982567d343b2d6ce5582ba240
a10aac915cb1c60ce81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.660420 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.662698 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.662728 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.662739 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.662753 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.662763 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:19Z","lastTransitionTime":"2025-12-13T22:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.671227 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.680021 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2855n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9749ec6a-aa76-4ae0-a9d0-453edbf21bca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qp56q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qp56q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2855n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.689350 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.701378 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.712485 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mh6xz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a704d7c-0ecc-4fb7-96d5-180353c3bf59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ce2031353aecc611e073433b53630ea2a6e0b0811164912dd5a62acf52ae567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ce2031353aecc611e073433b53630ea2a6e0b0811164912dd5a62acf52ae567\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mh6xz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.722907 4866 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdf761bd-0d95-4826-ba0d-8caae359ecf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41791b1801c6e8c30ba4f05d06fce59632c94271c4db64853d8d511f13f9e26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4d57b379add2c2a64fb7e84b381674ac05488e23d7687ed9e1456d8d35c3f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00c91b085943293a414c3b4f69185ca44fdaa25c72fb1ecd1d4d8fa2c99ca55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f0ec9ef48f93fde225982567d343b2d6ce5582ba240
a10aac915cb1c60ce81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.732520 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.741653 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.754213 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2855n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9749ec6a-aa76-4ae0-a9d0-453edbf21bca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b94cab8b6884ee87e178ab8d41f46afa62fc20bd3a7f7f4db6d93e3f923cca5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qp56q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26c741b622f1c2bb9339ddf226c75ab27cc68c4989f5c53436b7bf6e42b82176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qp56q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2855n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.765272 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.765928 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.765978 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.766030 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.766074 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.766093 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:19Z","lastTransitionTime":"2025-12-13T22:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.776793 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.799040 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b977f313-87b4-4173-9263-91bc45047631\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zrmrs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.809330 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-htw2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd122e00-4561-49cd-9477-2517a6094fb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:09Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:09Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhb67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-htw2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.821387 4866 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxpgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b196cacb-2343-4909-8566-b77d46744231\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fggn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fggn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wxpgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.839117 4866 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af35aee1-fb3b-462f-a7a3-e663a7f1a7aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb1f73695e99b201ce3d33b7d51712637ea7aa38a14a267e29cad18890ef1b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee175c7ac5ad3a46279f497d29513424b7c6601c518649f63eb6b950c4f04013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f1be11db30f10cdfbe473b20446b445e0354170c4e02a8e2c8d73e4d32851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://8badb9d461fbc970e520cbd92ddf854b86a1623943edeb0fe98256a5f14851e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d85c84f61afc57a279dcd5e7bed6eb2519760762024ea2a71e2c6b5888af2864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321257b51c93964b8f6e59dacd2eb3d2bbdfe0857d0e891c99b3ec0cd584d0f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321257b51c93964b8f6e59dacd2eb3d2bbdfe0857d0e891c99b3ec0cd584d0f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59af7903a168ae4f4b31385076abdfa005a144e56e86d32afffe28ab5f1d2837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59af7903a168ae4f4b31385076abdfa005a144e56e86d32afffe28ab5f1d2837\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://503e437e0a
d198ab5af20c8333b0edd1c33ee061714fb81224c3d366be9ca9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503e437e0ad198ab5af20c8333b0edd1c33ee061714fb81224c3d366be9ca9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.856268 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d450d44e-8219-4372-904c-6dfeb99953c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a756e1889ab55c446200026441e5add70e87750f0c7bcd844c6d69410828074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57855b825b37a7a5c179476a16a5f4a6d66d9299395ad2bf8972b515ec81335\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51680eb649a8778df43c6e4f2389caa6a69254bef0dee34535d27cc69fd688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://315b5749072410defd6fed09f5cc953c074319c1adda448fccd60a5d25c1f10c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f0c2c3ce34f3961f4be5be9beff2bbf0de2899cfac9219106773fdd8fb1969\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"file observer\\\\nW1213 22:17:05.442934 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 22:17:05.443156 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 22:17:05.444268 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-965818911/tls.crt::/tmp/serving-cert-965818911/tls.key\\\\\\\"\\\\nI1213 22:17:06.201874 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 22:17:06.207737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 22:17:06.207764 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 22:17:06.207790 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 22:17:06.207795 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 22:17:06.221866 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 22:17:06.221906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 22:17:06.221913 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 22:17:06.221918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 22:17:06.221921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 22:17:06.221925 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 22:17:06.221928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 22:17:06.222239 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 22:17:06.225368 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4192aeb8b99901fba1d442e613510c1d2dca07650c4a84e6f585d42b4de74c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.864880 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-265zz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c32dec92-1cb0-4596-b551-b35b25f09692\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92zd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-265zz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.868464 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.868509 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.868522 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.868543 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.868558 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:19Z","lastTransitionTime":"2025-12-13T22:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.876493 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g6nd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vpjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g6nd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.971657 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.971702 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.971712 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.971728 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:19 crc kubenswrapper[4866]: I1213 22:17:19.971741 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:19Z","lastTransitionTime":"2025-12-13T22:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.047417 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-sdd5b"] Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.047889 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdd5b" Dec 13 22:17:20 crc kubenswrapper[4866]: E1213 22:17:20.047957 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdd5b" podUID="1d7692f7-4101-4c41-86f0-d8c2883110bf" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.057417 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.068024 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.074483 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.074511 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.074521 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.074534 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.074544 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:20Z","lastTransitionTime":"2025-12-13T22:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.080493 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mh6xz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a704d7c-0ecc-4fb7-96d5-180353c3bf59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ce2031353aecc611e073433b53630ea2a6e0b0811164912dd5a62acf52ae567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ce2031353aecc611e073433b53630ea2a6e0b0811164912dd5a62acf52ae567\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mh6xz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.089504 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2855n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9749ec6a-aa76-4ae0-a9d0-453edbf21bca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b94cab8b6884ee87e178ab8d41f46afa62fc20bd3a7f7f4db6d93e3f923cca5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qp56q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\
\"cri-o://26c741b622f1c2bb9339ddf226c75ab27cc68c4989f5c53436b7bf6e42b82176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qp56q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2855n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.100848 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdf761bd-0d95-4826-ba0d-8caae359ecf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41791b1801c6e8c30ba4f05d06fce59632c94271c4db64853d8d511f13f9e26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4d57b379add2c2a64fb7e84b381674ac05488e23d7687ed9e1456d8d35c3f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00c91b085943293a414c3b4f69185ca44fdaa25c72fb1ecd1d4d8fa2c99ca55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f0ec9ef48f93fde225982567d343b2d6ce5582ba240a10aac915cb1c60ce81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.111685 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.120742 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.128027 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-htw2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd122e00-4561-49cd-9477-2517a6094fb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:09Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:09Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhb67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:09Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-htw2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.135911 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxpgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b196cacb-2343-4909-8566-b77d46744231\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fggn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fggn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wxpgw\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.144734 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.146014 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d7692f7-4101-4c41-86f0-d8c2883110bf-metrics-certs\") pod \"network-metrics-daemon-sdd5b\" (UID: \"1d7692f7-4101-4c41-86f0-d8c2883110bf\") " pod="openshift-multus/network-metrics-daemon-sdd5b" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.146096 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmbcd\" (UniqueName: \"kubernetes.io/projected/1d7692f7-4101-4c41-86f0-d8c2883110bf-kube-api-access-vmbcd\") pod \"network-metrics-daemon-sdd5b\" (UID: \"1d7692f7-4101-4c41-86f0-d8c2883110bf\") " 
pod="openshift-multus/network-metrics-daemon-sdd5b" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.153275 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.171323 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b977f313-87b4-4173-9263-91bc45047631\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zrmrs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.177255 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.177290 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.177303 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.177323 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.177334 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:20Z","lastTransitionTime":"2025-12-13T22:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.179724 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-265zz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c32dec92-1cb0-4596-b551-b35b25f09692\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92zd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-265zz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.188641 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g6nd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vpjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-g6nd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.195869 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdd5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d7692f7-4101-4c41-86f0-d8c2883110bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.212552 4866 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af35aee1-fb3b-462f-a7a3-e663a7f1a7aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb1f73695e99b201ce3d33b7d51712637ea7aa38a14a267e29cad18890ef1b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee175c7ac5ad3a46279f497d29513424b7c6601c518649f63eb6b950c4f04013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f1be11db30f10cdfbe473b20446b445e0354170c4e02a8e2c8d73e4d32851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://8badb9d461fbc970e520cbd92ddf854b86a1623943edeb0fe98256a5f14851e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d85c84f61afc57a279dcd5e7bed6eb2519760762024ea2a71e2c6b5888af2864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321257b51c93964b8f6e59dacd2eb3d2bbdfe0857d0e891c99b3ec0cd584d0f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321257b51c93964b8f6e59dacd2eb3d2bbdfe0857d0e891c99b3ec0cd584d0f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59af7903a168ae4f4b31385076abdfa005a144e56e86d32afffe28ab5f1d2837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59af7903a168ae4f4b31385076abdfa005a144e56e86d32afffe28ab5f1d2837\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://503e437e0a
d198ab5af20c8333b0edd1c33ee061714fb81224c3d366be9ca9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503e437e0ad198ab5af20c8333b0edd1c33ee061714fb81224c3d366be9ca9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.213481 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 22:17:20 crc kubenswrapper[4866]: E1213 22:17:20.213667 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.213751 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 22:17:20 crc kubenswrapper[4866]: E1213 22:17:20.213800 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.231397 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d450d44e-8219-4372-904c-6dfeb99953c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a756e1889ab55c446200026441e5add70e87750f0c7bcd844c6d69410828074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57855b825b37a7a5c179476a16a5f4a6d66d9299395ad2bf8972b515ec81335\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51680eb649a8778df43c6e4f2389caa6a69254bef0dee34535d27cc69fd688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://315b5749072410defd6fed09f5cc953c074319c1adda448fccd60a5d25c1f10c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f0c2c3ce34f3961f4be5be9beff2bbf0de2899cfac9219106773fdd8fb1969\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"file observer\\\\nW1213 22:17:05.442934 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 22:17:05.443156 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 22:17:05.444268 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-965818911/tls.crt::/tmp/serving-cert-965818911/tls.key\\\\\\\"\\\\nI1213 22:17:06.201874 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 22:17:06.207737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 22:17:06.207764 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 22:17:06.207790 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 22:17:06.207795 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 22:17:06.221866 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 22:17:06.221906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 22:17:06.221913 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 22:17:06.221918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 22:17:06.221921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 22:17:06.221925 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 22:17:06.221928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 22:17:06.222239 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 22:17:06.225368 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4192aeb8b99901fba1d442e613510c1d2dca07650c4a84e6f585d42b4de74c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.246897 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmbcd\" (UniqueName: \"kubernetes.io/projected/1d7692f7-4101-4c41-86f0-d8c2883110bf-kube-api-access-vmbcd\") pod \"network-metrics-daemon-sdd5b\" (UID: \"1d7692f7-4101-4c41-86f0-d8c2883110bf\") " pod="openshift-multus/network-metrics-daemon-sdd5b" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.247402 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d7692f7-4101-4c41-86f0-d8c2883110bf-metrics-certs\") pod \"network-metrics-daemon-sdd5b\" (UID: \"1d7692f7-4101-4c41-86f0-d8c2883110bf\") " pod="openshift-multus/network-metrics-daemon-sdd5b" Dec 13 22:17:20 crc kubenswrapper[4866]: E1213 22:17:20.247491 4866 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object 
"openshift-multus"/"metrics-daemon-secret" not registered Dec 13 22:17:20 crc kubenswrapper[4866]: E1213 22:17:20.247550 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d7692f7-4101-4c41-86f0-d8c2883110bf-metrics-certs podName:1d7692f7-4101-4c41-86f0-d8c2883110bf nodeName:}" failed. No retries permitted until 2025-12-13 22:17:20.747534161 +0000 UTC m=+38.788872713 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d7692f7-4101-4c41-86f0-d8c2883110bf-metrics-certs") pod "network-metrics-daemon-sdd5b" (UID: "1d7692f7-4101-4c41-86f0-d8c2883110bf") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.270593 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmbcd\" (UniqueName: \"kubernetes.io/projected/1d7692f7-4101-4c41-86f0-d8c2883110bf-kube-api-access-vmbcd\") pod \"network-metrics-daemon-sdd5b\" (UID: \"1d7692f7-4101-4c41-86f0-d8c2883110bf\") " pod="openshift-multus/network-metrics-daemon-sdd5b" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.279601 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.279628 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.279637 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.279650 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.279659 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:20Z","lastTransitionTime":"2025-12-13T22:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.381998 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.382035 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.382063 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.382081 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.382091 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:20Z","lastTransitionTime":"2025-12-13T22:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.485028 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.485069 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.485079 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.485092 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.485101 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:20Z","lastTransitionTime":"2025-12-13T22:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.494330 4866 generic.go:334] "Generic (PLEG): container finished" podID="9a704d7c-0ecc-4fb7-96d5-180353c3bf59" containerID="e503175de68b3aacf2f15673121b9cb615c5bf1cc65d3ac05518bb7d4b2186d7" exitCode=0 Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.494381 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mh6xz" event={"ID":"9a704d7c-0ecc-4fb7-96d5-180353c3bf59","Type":"ContainerDied","Data":"e503175de68b3aacf2f15673121b9cb615c5bf1cc65d3ac05518bb7d4b2186d7"} Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.499826 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g6nd6" event={"ID":"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c","Type":"ContainerStarted","Data":"4f124a69a4f95d2c4d96ca6b9e0688a0301c08cfe32cffb613d7943057978efe"} Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.501194 4866 generic.go:334] "Generic (PLEG): container finished" podID="b977f313-87b4-4173-9263-91bc45047631" containerID="26990efe2746cf0436d86476faf525d1d5616f6a7bf13637f6e79beab178592e" exitCode=0 Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.501233 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" event={"ID":"b977f313-87b4-4173-9263-91bc45047631","Type":"ContainerDied","Data":"26990efe2746cf0436d86476faf525d1d5616f6a7bf13637f6e79beab178592e"} Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.502504 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-htw2b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd122e00-4561-49cd-9477-2517a6094fb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:09Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:09Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhb67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-htw2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.503329 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"0f5028b633a223104167ae055e3033226ec296010134d8bad677b5d7f15de0be"} Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.505465 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxpgw" event={"ID":"b196cacb-2343-4909-8566-b77d46744231","Type":"ContainerStarted","Data":"0671047977ec4f147a65166f71b1f8407722f9d181314ed5945689c9079b17b6"} Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.505488 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxpgw" event={"ID":"b196cacb-2343-4909-8566-b77d46744231","Type":"ContainerStarted","Data":"d0f040b4b6ebc10974cb19509fa7b6101a6c51581050694632acc59dbed97df5"} Dec 13 22:17:20 crc 
kubenswrapper[4866]: I1213 22:17:20.505497 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxpgw" event={"ID":"b196cacb-2343-4909-8566-b77d46744231","Type":"ContainerStarted","Data":"da96f965968330b2bf4504be6034dcbd3b38bd9ed1349aa7be5bbcf17df3b71f"} Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.511564 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxpgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b196cacb-2343-4909-8566-b77d46744231\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fggn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fggn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:18Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wxpgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.529588 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.540025 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.555839 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b977f313-87b4-4173-9263-91bc45047631\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zrmrs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc 
kubenswrapper[4866]: I1213 22:17:20.564090 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-265zz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c32dec92-1cb0-4596-b551-b35b25f09692\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92zd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-265zz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.574579 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g6nd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vpjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-g6nd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.584012 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdd5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d7692f7-4101-4c41-86f0-d8c2883110bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.587831 4866 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.587856 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.587864 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.587878 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.587887 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:20Z","lastTransitionTime":"2025-12-13T22:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.601188 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af35aee1-fb3b-462f-a7a3-e663a7f1a7aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb1f73695e99b201ce3d33b7d51712637ea7aa38a14a267e29cad18890ef1b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee175c7ac5ad3a46279f497d29513424b7c6601c518649f63eb6b950c4f04013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCo
unt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f1be11db30f10cdfbe473b20446b445e0354170c4e02a8e2c8d73e4d32851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badb9d461fbc970e520cbd92ddf854b86a1623943edeb0fe98256a5f14851e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d85c84f61afc57a279dcd5e7bed6eb2519760762024ea2a71e2c6b5888af2864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321257b51c93964b8f6e59dacd2eb3d2bbdfe0857d0e891c99b3ec0cd584d0f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\
"containerID\\\":\\\"cri-o://321257b51c93964b8f6e59dacd2eb3d2bbdfe0857d0e891c99b3ec0cd584d0f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59af7903a168ae4f4b31385076abdfa005a144e56e86d32afffe28ab5f1d2837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59af7903a168ae4f4b31385076abdfa005a144e56e86d32afffe28ab5f1d2837\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://503e437e0ad198ab5af20c8333b0edd1c33ee061714fb81224c3d366be9ca9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503e437e0ad198ab5af20c8333b0edd1c33ee061714fb81224c3d366be9ca9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.611018 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d450d44e-8219-4372-904c-6dfeb99953c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a756e1889ab55c446200026441e5add70e87750f0c7bcd844c6d69410828074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57855b825b37a7a5c179476a16a5f4a6d66d9299395ad2bf8972b515ec81335\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51680eb649a8778df43c6e4f2389caa6a69254bef0dee34535d27cc69fd688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://315b5749072410defd6fed09f5cc953c074319c1adda448fccd60a5d25c1f10c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://c5f0c2c3ce34f3961f4be5be9beff2bbf0de2899cfac9219106773fdd8fb1969\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"file observer\\\\nW1213 22:17:05.442934 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 22:17:05.443156 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 22:17:05.444268 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-965818911/tls.crt::/tmp/serving-cert-965818911/tls.key\\\\\\\"\\\\nI1213 22:17:06.201874 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 22:17:06.207737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 22:17:06.207764 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 22:17:06.207790 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 22:17:06.207795 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 22:17:06.221866 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 22:17:06.221906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 22:17:06.221913 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 22:17:06.221918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 22:17:06.221921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 22:17:06.221925 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 22:17:06.221928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 22:17:06.222239 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 22:17:06.225368 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4192aeb8b99901fba1d442e613510c1d2dca07650c4a84e6f585d42b4de74c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.626824 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.636530 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.648227 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mh6xz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a704d7c-0ecc-4fb7-96d5-180353c3bf59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ce2031353aecc611e073433b53630ea2a6e0b0811164912dd5a62acf52ae567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ce2031353aecc611e073433b53630ea2a6e0b0811164912dd5a62acf52ae567\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e503175de68b3aacf2f15673121b9cb615c5bf1cc65d3ac05518bb7d4b2186d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e503175de68b3aacf2f15673121b9cb615c5bf1cc65d3ac05518bb7d4b2186d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:17:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:17:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-
13T22:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mh6xz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.664859 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2855n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9749ec6a-aa76-4ae0-a9d0-453edbf21bca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b94cab8b6884ee87e178ab8d41f46afa62fc20bd3a7f7f4db6d93e3f923cca5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qp56q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26c741b622f1c2bb9339ddf226c75ab27cc68c4989f5c53436b7bf6e42b82176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qp56q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\
\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2855n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.676583 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdf761bd-0d95-4826-ba0d-8caae359ecf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41791b1801c6e8c30ba4f05d06fce59632c94271c4db64853d8d511f13f9e26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4d57b379add2c2a64fb7e84b381674ac05488e23d7687ed9e1456d8d35c3f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00c91b085943293a414c3b4f69185ca44fdaa25c72fb1ecd1d4d8fa2c99ca55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f0ec9ef48f93fde225982567d343b2d6ce5582ba240a10aac915cb1c60ce81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.690341 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.690598 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.690616 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.690626 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.690642 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.690652 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:20Z","lastTransitionTime":"2025-12-13T22:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.699006 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.710690 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2855n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9749ec6a-aa76-4ae0-a9d0-453edbf21bca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b94cab8b6884ee87e178ab8d41f46afa62fc20bd3a7f7f4db6d93e3f923cca5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qp56q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26c741b622f1c2bb9339ddf226c75ab27cc68c4989f5c53436b7bf6e42b82176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qp56q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2855n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.721095 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdf761bd-0d95-4826-ba0d-8caae359ecf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41791b1801c6e8c30ba4f05d06fce59632c94271c4db64853d8d511f13f9e26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4d57b379add2c2a64fb7e84b381674ac05488e23d7687ed9e1456d8d35c3f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00c91b085943293a414c3b4f69185ca44fdaa25c72fb1ecd1d4d8fa2c99ca55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f0ec9ef48f93fde225982567d343b2d6ce5582ba240a10aac915cb1c60ce81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.732662 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.748750 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.752435 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d7692f7-4101-4c41-86f0-d8c2883110bf-metrics-certs\") pod \"network-metrics-daemon-sdd5b\" (UID: \"1d7692f7-4101-4c41-86f0-d8c2883110bf\") " pod="openshift-multus/network-metrics-daemon-sdd5b" Dec 13 22:17:20 crc kubenswrapper[4866]: E1213 22:17:20.752596 4866 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 22:17:20 crc kubenswrapper[4866]: E1213 22:17:20.752655 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d7692f7-4101-4c41-86f0-d8c2883110bf-metrics-certs podName:1d7692f7-4101-4c41-86f0-d8c2883110bf nodeName:}" failed. No retries permitted until 2025-12-13 22:17:21.752639523 +0000 UTC m=+39.793978065 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d7692f7-4101-4c41-86f0-d8c2883110bf-metrics-certs") pod "network-metrics-daemon-sdd5b" (UID: "1d7692f7-4101-4c41-86f0-d8c2883110bf") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.756313 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-htw2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd122e00-4561-49cd-9477-2517a6094fb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:09Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:09Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhb67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-htw2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.766533 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxpgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b196cacb-2343-4909-8566-b77d46744231\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0f040b4b6ebc10974cb19509fa7b6101a6c51581050694632acc59dbed97df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:17:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fggn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0671047977ec4f147a65166f71b1f8407722f9d181314ed5945689c9079b17b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:17:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fggn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wxpgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.776460 4866 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5028b633a223104167ae055e3033226ec296010134d8bad677b5d7f15de0be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:17:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.789198 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.793184 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.793220 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.793228 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.793242 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.793250 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:20Z","lastTransitionTime":"2025-12-13T22:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.808201 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b977f313-87b4-4173-9263-91bc45047631\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26990efe2746cf0436d86476faf525d1d5616f6a7bf13637f6
e79beab178592e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26990efe2746cf0436d86476faf525d1d5616f6a7bf13637f6e79beab178592e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:17:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:17:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zrmrs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.816463 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-265zz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c32dec92-1cb0-4596-b551-b35b25f09692\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92zd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-265zz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.831504 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g6nd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f124a69a4f95d2c4d96ca6b9e0688a0301c08cfe32cffb613d7943057978efe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:17:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\
\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vpjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g6nd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.842731 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdd5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d7692f7-4101-4c41-86f0-d8c2883110bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.889123 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af35aee1-fb3b-462f-a7a3-e663a7f1a7aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb1f73695e99b201ce3d33b7d51712637ea7aa38a14a267e29cad18890ef1b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee175c7ac5ad3a46279f497d29513424b7c6601c518649f63eb6b950c4f04013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f1be11db30f10cdfbe473b20446b445e0354170c4e02a8e2c8d73e4d32851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badb9d461fbc970e520cbd92ddf854b86a1623
943edeb0fe98256a5f14851e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d85c84f61afc57a279dcd5e7bed6eb2519760762024ea2a71e2c6b5888af2864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321257b51c93964b8f6e59dacd2eb3d2bbdfe0857d0e891c99b3ec0cd584d0f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321257b51c93964b8f6e59dacd2eb3d2bbdfe0857d0e891c99b3ec0cd584d0f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59af7903a168ae4f4b31385076abdfa005a144e56e86d32afffe28ab5f1d2837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59af7903a168ae4f4b31385076abdfa005a144e56e86d32afffe28ab5f1d2837\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://503e437e0ad198ab5af20c8333b0edd1c33ee061714fb81224c3d366be9ca9a6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503e437e0ad198ab5af20c8333b0edd1c33ee061714fb81224c3d366be9ca9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.895806 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.895842 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.895852 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.895868 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.895878 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:20Z","lastTransitionTime":"2025-12-13T22:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.913120 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d450d44e-8219-4372-904c-6dfeb99953c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a756e1889ab55c446200026441e5add70e87750f0c7bcd844c6d69410828074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57855b825b37a7a5c179476a16a5f4a6d66d9299395ad2bf8972b515ec81335\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51680eb649a8778df43c6e4f2389caa6a69254bef0dee34535d27cc69fd688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://315b5749072410defd6fed09f5cc953c074319c1adda448fccd60a5d25c1f10c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f0c2c3ce34f3961f4be5be9beff2bbf0de2899cfac9219106773fdd8fb1969\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"file observer\\\\nW1213 22:17:05.442934 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 22:17:05.443156 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 22:17:05.444268 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-965818911/tls.crt::/tmp/serving-cert-965818911/tls.key\\\\\\\"\\\\nI1213 22:17:06.201874 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 22:17:06.207737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 22:17:06.207764 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 22:17:06.207790 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 22:17:06.207795 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 22:17:06.221866 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 22:17:06.221906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 22:17:06.221913 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 22:17:06.221918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 22:17:06.221921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 22:17:06.221925 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 22:17:06.221928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 22:17:06.222239 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 22:17:06.225368 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4192aeb8b99901fba1d442e613510c1d2dca07650c4a84e6f585d42b4de74c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.947592 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.991907 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.997333 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.997370 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.997379 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.997392 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:20 crc kubenswrapper[4866]: I1213 22:17:20.997401 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:20Z","lastTransitionTime":"2025-12-13T22:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.028426 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mh6xz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a704d7c-0ecc-4fb7-96d5-180353c3bf59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ce2031353aecc611e073433b53630ea2a6e0b0811164912dd5a62acf52ae567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ce2031353aecc611e073433b53630ea2a6e0b0811164912dd5a62acf52ae567\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e503175de68b3aacf2f15673121b9cb615c5bf1cc65d3ac05518bb7d4b2186d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e503175de68b3aacf2f15673121b9cb615c5bf1cc65d3ac05518bb7d4b2186d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:17:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:17:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\
\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78cxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mh6xz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.100022 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.100088 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.100104 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.100125 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.100137 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:21Z","lastTransitionTime":"2025-12-13T22:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.202173 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.202225 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.202233 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.202246 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.202254 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:21Z","lastTransitionTime":"2025-12-13T22:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.212748 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 22:17:21 crc kubenswrapper[4866]: E1213 22:17:21.213128 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.305200 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.305243 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.305256 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.305273 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.305285 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:21Z","lastTransitionTime":"2025-12-13T22:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.407444 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.407488 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.407517 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.407532 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.407543 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:21Z","lastTransitionTime":"2025-12-13T22:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.513700 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.513734 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.513743 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.513757 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.513767 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:21Z","lastTransitionTime":"2025-12-13T22:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.516382 4866 generic.go:334] "Generic (PLEG): container finished" podID="9a704d7c-0ecc-4fb7-96d5-180353c3bf59" containerID="ac96b6ff6b2457b0d278ac18319f8fbf8cdf8d45db7e4c7106367c5509d23681" exitCode=0 Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.516449 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mh6xz" event={"ID":"9a704d7c-0ecc-4fb7-96d5-180353c3bf59","Type":"ContainerDied","Data":"ac96b6ff6b2457b0d278ac18319f8fbf8cdf8d45db7e4c7106367c5509d23681"} Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.522487 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" event={"ID":"b977f313-87b4-4173-9263-91bc45047631","Type":"ContainerStarted","Data":"b26373da6f39692878640a993da4d753fbeaba62a096a177aaec37f9127cc907"} Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.522538 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" event={"ID":"b977f313-87b4-4173-9263-91bc45047631","Type":"ContainerStarted","Data":"43b3b5c647bff1a3b3a4b22b597ff27d27b3bfad6a6ca7da38701e9d227ff4b4"} Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.522550 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" event={"ID":"b977f313-87b4-4173-9263-91bc45047631","Type":"ContainerStarted","Data":"92604ac1010b6b2e6507aa99019b84c38baa44ae21f5aaaa8c5f4ace1f2f624f"} Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.522561 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" event={"ID":"b977f313-87b4-4173-9263-91bc45047631","Type":"ContainerStarted","Data":"d16e93aa9094bb0751cef12e2cab89d2e1d63fcaebd2889b158c941bf81b3fd4"} Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.522573 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" event={"ID":"b977f313-87b4-4173-9263-91bc45047631","Type":"ContainerStarted","Data":"e229a65c98b5a52cee42cb1c0d0c8a56d694afa832e8db10ddbbfe0f75bd15ca"} Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.522584 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" event={"ID":"b977f313-87b4-4173-9263-91bc45047631","Type":"ContainerStarted","Data":"28ef25002d632b00ab6ff5ad0c473adf34634c8a7a1bb748da46bc7c5d0564d4"} Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.525776 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1fdbb502d026fcc387146d6d5fd99647271166192bfb23184e5b8a834126ca20"} Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.525817 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"59273411d6ea34ded027d2815c2b563a9cbb8e7ed1868f408300bf92a13a30b5"} Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.526843 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-265zz" event={"ID":"c32dec92-1cb0-4596-b551-b35b25f09692","Type":"ContainerStarted","Data":"25d2981531e6d023f98efd34339a1dceb72fa05b2d4cfa898049709fcca20d6c"} Dec 13 22:17:21 crc 
kubenswrapper[4866]: I1213 22:17:21.533086 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5028b633a223104167ae055e3033226ec296010134d8bad677b5d7f15de0be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:17:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.542831 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.558878 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b977f313-87b4-4173-9263-91bc45047631\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26990efe2746cf0436d86476faf525d1d5616f6a7bf13637f6e79beab178592e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26990efe2746cf0436d86476faf525d1d5616f6a7bf13637f6e79beab178592e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:17:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:17:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8flqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zrmrs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.567097 
4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-htw2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd122e00-4561-49cd-9477-2517a6094fb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:09Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:09Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhb67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-htw2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.576190 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxpgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b196cacb-2343-4909-8566-b77d46744231\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0f040b4b6ebc10974cb19509fa7b6101a6c51581050694632acc59dbed97df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:17:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fggn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0671047977ec4f147a65166f71b1f8407722f9d181314ed5945689c9079b17b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:17:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fggn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wxpgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.588393 4866 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-g6nd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f124a69a4f95d2c4d96ca6b9e0688a0301c08cfe32cffb613d7943057978efe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:17:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vpjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-g6nd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T22:17:21Z is after 2025-08-24T17:21:41Z" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.597997 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdd5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d7692f7-4101-4c41-86f0-d8c2883110bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdd5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-13T22:17:21Z is after 2025-08-24T17:21:41Z" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.616695 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.616727 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.616738 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.616755 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.616766 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:21Z","lastTransitionTime":"2025-12-13T22:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.617209 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af35aee1-fb3b-462f-a7a3-e663a7f1a7aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb1f73695e99b201ce3d33b7d51712637ea7aa38a14a267e29cad18890ef1b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee175c7ac5ad3a46279f497d29513424b7c6601c518649f63eb6b950c4f04013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f1be11db30f10cdfbe473b20446b445e0354170c4e02a8e2c8d73e4d32851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8badb9d461fbc970e520cbd92ddf854b86a1623943edeb0fe98256a5f14851e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d85c84f61afc57a279dcd5e7bed6eb2519760762024ea2a71e2c6b5888af2864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321257b51c93964b8f6e59dacd2eb3d2bbdfe0857d0e891c99b3ec0cd584d0f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e
9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321257b51c93964b8f6e59dacd2eb3d2bbdfe0857d0e891c99b3ec0cd584d0f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59af7903a168ae4f4b31385076abdfa005a144e56e86d32afffe28ab5f1d2837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59af7903a168ae4f4b31385076abdfa005a144e56e86d32afffe28ab5f1d2837\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://503e437e0ad198ab5af20c8333b0edd1c33ee061714fb81224c3d366be9ca9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503e437e0ad198ab5af20c8333b0edd1c33ee061714fb81224c3d366be9ca9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T22:17:21Z is after 2025-08-24T17:21:41Z" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.631042 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d450d44e-8219-4372-904c-6dfeb99953c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a756e1889ab55c446200026441e5add70e87750f0c7bcd844c6d69410828074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57855b825b37a7a5c179476a16a5f4a6d66d9299395ad2bf8972b515ec81335\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51680eb649a8778df43c6e4f2389caa6a69254bef0dee34535d27cc69fd688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://315b5749072410defd6fed09f5cc953c074319c1adda448fccd60a5d25c1f10c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f0c2c3ce34f3961f4be5be9beff2bbf0de2899cfac9219106773fdd8fb1969\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"file observer\\\\nW1213 22:17:05.442934 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 22:17:05.443156 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 22:17:05.444268 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-965818911/tls.crt::/tmp/serving-cert-965818911/tls.key\\\\\\\"\\\\nI1213 22:17:06.201874 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 22:17:06.207737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 22:17:06.207764 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 22:17:06.207790 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 22:17:06.207795 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 22:17:06.221866 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 22:17:06.221906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 22:17:06.221913 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 22:17:06.221918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 22:17:06.221921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 22:17:06.221925 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 22:17:06.221928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 22:17:06.222239 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 22:17:06.225368 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4192aeb8b99901fba1d442e613510c1d2dca07650c4a84e6f585d42b4de74c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T22:16:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T22:16:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T22:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:16:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T22:17:21Z is after 2025-08-24T17:21:41Z" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.640111 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-265zz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c32dec92-1cb0-4596-b551-b35b25f09692\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92zd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T22:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-265zz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T22:17:21Z is after 2025-08-24T17:21:41Z" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.651472 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T22:17:21Z is after 2025-08-24T17:21:41Z" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.662300 4866 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T22:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T22:17:21Z is after 2025-08-24T17:21:41Z" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.707613 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=13.707590688 podStartE2EDuration="13.707590688s" podCreationTimestamp="2025-12-13 22:17:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:21.707412144 +0000 UTC m=+39.748750696" watchObservedRunningTime="2025-12-13 22:17:21.707590688 +0000 UTC m=+39.748929250" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.719488 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.719538 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.719551 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.719568 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.719580 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:21Z","lastTransitionTime":"2025-12-13T22:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.759589 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-2855n" podStartSLOduration=15.759570571 podStartE2EDuration="15.759570571s" podCreationTimestamp="2025-12-13 22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:21.747878546 +0000 UTC m=+39.789217098" watchObservedRunningTime="2025-12-13 22:17:21.759570571 +0000 UTC m=+39.800909123" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.760672 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d7692f7-4101-4c41-86f0-d8c2883110bf-metrics-certs\") pod \"network-metrics-daemon-sdd5b\" (UID: \"1d7692f7-4101-4c41-86f0-d8c2883110bf\") " pod="openshift-multus/network-metrics-daemon-sdd5b" Dec 13 22:17:21 crc kubenswrapper[4866]: E1213 22:17:21.760806 4866 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 22:17:21 crc kubenswrapper[4866]: E1213 22:17:21.760855 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d7692f7-4101-4c41-86f0-d8c2883110bf-metrics-certs podName:1d7692f7-4101-4c41-86f0-d8c2883110bf nodeName:}" failed. No retries permitted until 2025-12-13 22:17:23.760842491 +0000 UTC m=+41.802181043 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d7692f7-4101-4c41-86f0-d8c2883110bf-metrics-certs") pod "network-metrics-daemon-sdd5b" (UID: "1d7692f7-4101-4c41-86f0-d8c2883110bf") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.789696 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-265zz" podStartSLOduration=16.789674759 podStartE2EDuration="16.789674759s" podCreationTimestamp="2025-12-13 22:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:21.760782019 +0000 UTC m=+39.802120571" watchObservedRunningTime="2025-12-13 22:17:21.789674759 +0000 UTC m=+39.831013311" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.823224 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.823256 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.823267 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.823283 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.823294 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:21Z","lastTransitionTime":"2025-12-13T22:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.831563 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-g6nd6" podStartSLOduration=15.831543654 podStartE2EDuration="15.831543654s" podCreationTimestamp="2025-12-13 22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:21.790422546 +0000 UTC m=+39.831761098" watchObservedRunningTime="2025-12-13 22:17:21.831543654 +0000 UTC m=+39.872882206" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.877311 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=15.87729243 podStartE2EDuration="15.87729243s" podCreationTimestamp="2025-12-13 22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:21.876233235 +0000 UTC m=+39.917571807" watchObservedRunningTime="2025-12-13 22:17:21.87729243 +0000 UTC m=+39.918630982" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.911337 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=15.91132011 podStartE2EDuration="15.91132011s" podCreationTimestamp="2025-12-13 22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:21.910396219 +0000 UTC m=+39.951734781" watchObservedRunningTime="2025-12-13 22:17:21.91132011 +0000 UTC m=+39.952658662" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.924952 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.924988 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.924999 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.925015 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.925024 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:21Z","lastTransitionTime":"2025-12-13T22:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.962146 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:21 crc kubenswrapper[4866]: I1213 22:17:21.962273 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 22:17:21 crc kubenswrapper[4866]: E1213 22:17:21.962302 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:37.962283379 +0000 UTC m=+56.003621931 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:21 crc kubenswrapper[4866]: E1213 22:17:21.962365 4866 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 13 22:17:21 crc kubenswrapper[4866]: E1213 22:17:21.962409 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 22:17:37.962399142 +0000 UTC m=+56.003737694 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.027161 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.027196 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.027208 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.027223 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.027232 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:22Z","lastTransitionTime":"2025-12-13T22:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.063591 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.063854 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.064012 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 22:17:22 crc kubenswrapper[4866]: E1213 22:17:22.063757 4866 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 22:17:22 crc kubenswrapper[4866]: E1213 22:17:22.064171 4866 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 22:17:22 crc kubenswrapper[4866]: E1213 22:17:22.064288 4866 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 22:17:22 crc kubenswrapper[4866]: E1213 22:17:22.064258 4866 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 22:17:22 crc kubenswrapper[4866]: E1213 22:17:22.064374 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-13 22:17:38.064357461 +0000 UTC m=+56.105696013 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 22:17:22 crc kubenswrapper[4866]: E1213 22:17:22.064395 4866 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 22:17:22 crc kubenswrapper[4866]: E1213 22:17:22.064420 4866 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 22:17:22 crc kubenswrapper[4866]: E1213 22:17:22.063946 4866 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 13 22:17:22 crc kubenswrapper[4866]: E1213 22:17:22.064458 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 22:17:38.064446943 +0000 UTC m=+56.105785495 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 13 22:17:22 crc kubenswrapper[4866]: E1213 22:17:22.064489 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-13 22:17:38.064466163 +0000 UTC m=+56.105804715 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.069092 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxpgw" podStartSLOduration=16.069076922 podStartE2EDuration="16.069076922s" podCreationTimestamp="2025-12-13 22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:22.068508148 +0000 UTC m=+40.109846700" watchObservedRunningTime="2025-12-13 22:17:22.069076922 +0000 UTC m=+40.110415474" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.129415 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.129456 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.129465 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.129481 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.129490 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:22Z","lastTransitionTime":"2025-12-13T22:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.212508 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.212560 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.212706 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdd5b" Dec 13 22:17:22 crc kubenswrapper[4866]: E1213 22:17:22.212798 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 22:17:22 crc kubenswrapper[4866]: E1213 22:17:22.212872 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdd5b" podUID="1d7692f7-4101-4c41-86f0-d8c2883110bf" Dec 13 22:17:22 crc kubenswrapper[4866]: E1213 22:17:22.212947 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.233776 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.234080 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.234210 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.234289 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.234370 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:22Z","lastTransitionTime":"2025-12-13T22:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.336129 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.336161 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.336169 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.336182 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.336191 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:22Z","lastTransitionTime":"2025-12-13T22:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.438692 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.438723 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.439387 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.439403 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.439415 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:22Z","lastTransitionTime":"2025-12-13T22:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.531870 4866 generic.go:334] "Generic (PLEG): container finished" podID="9a704d7c-0ecc-4fb7-96d5-180353c3bf59" containerID="d772965c9d49a6ed971442a8976c910d728726e28e4e4af286b39423d985ddf2" exitCode=0 Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.532113 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mh6xz" event={"ID":"9a704d7c-0ecc-4fb7-96d5-180353c3bf59","Type":"ContainerDied","Data":"d772965c9d49a6ed971442a8976c910d728726e28e4e4af286b39423d985ddf2"} Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.533701 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"db3d523dc9cbd32318fd60831c61f608805e5b107b808230864be476db66010d"} Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.542511 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.542548 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.542559 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.542573 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.542585 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:22Z","lastTransitionTime":"2025-12-13T22:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.644396 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.644490 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.644502 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.644517 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.644529 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:22Z","lastTransitionTime":"2025-12-13T22:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.747341 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.747367 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.747376 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.747390 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.747401 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:22Z","lastTransitionTime":"2025-12-13T22:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.849400 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.849440 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.849452 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.849468 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.849478 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:22Z","lastTransitionTime":"2025-12-13T22:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.952427 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.952457 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.952465 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.952478 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:22 crc kubenswrapper[4866]: I1213 22:17:22.952487 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:22Z","lastTransitionTime":"2025-12-13T22:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.054684 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.054735 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.054747 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.054760 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.054769 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:23Z","lastTransitionTime":"2025-12-13T22:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.157551 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.157594 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.157604 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.157618 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.157630 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:23Z","lastTransitionTime":"2025-12-13T22:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.212869 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 22:17:23 crc kubenswrapper[4866]: E1213 22:17:23.212969 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.259858 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.260034 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.260131 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.260235 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.260324 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:23Z","lastTransitionTime":"2025-12-13T22:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.362716 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.362746 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.362758 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.362771 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.362780 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:23Z","lastTransitionTime":"2025-12-13T22:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.464292 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.464322 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.464332 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.464345 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.464354 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:23Z","lastTransitionTime":"2025-12-13T22:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.539188 4866 generic.go:334] "Generic (PLEG): container finished" podID="9a704d7c-0ecc-4fb7-96d5-180353c3bf59" containerID="b694d830edd004c83b7eef35432ad72336342ed2a655220a4fcd1b6446b0ed2b" exitCode=0 Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.539307 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mh6xz" event={"ID":"9a704d7c-0ecc-4fb7-96d5-180353c3bf59","Type":"ContainerDied","Data":"b694d830edd004c83b7eef35432ad72336342ed2a655220a4fcd1b6446b0ed2b"} Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.549331 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" event={"ID":"b977f313-87b4-4173-9263-91bc45047631","Type":"ContainerStarted","Data":"01bc4d54641f007a64ff420c9ed7037d8aeb139aefeae98a76d8bfd681e864b4"} Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.552748 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-htw2b" event={"ID":"cd122e00-4561-49cd-9477-2517a6094fb5","Type":"ContainerStarted","Data":"e4c566ae16183198d782d4b5c2b355bb6f001f1ae6a0f136d7156e5cfc45dc3c"} Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.567398 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.567459 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.567479 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.567512 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.567533 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:23Z","lastTransitionTime":"2025-12-13T22:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.589667 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-htw2b" podStartSLOduration=18.589645571 podStartE2EDuration="18.589645571s" podCreationTimestamp="2025-12-13 22:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:23.588369301 +0000 UTC m=+41.629707883" watchObservedRunningTime="2025-12-13 22:17:23.589645571 +0000 UTC m=+41.630984123" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.671782 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.671820 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.671828 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.671842 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.671857 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:23Z","lastTransitionTime":"2025-12-13T22:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.775342 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.775389 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.775398 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.775429 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.775440 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:23Z","lastTransitionTime":"2025-12-13T22:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.782212 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d7692f7-4101-4c41-86f0-d8c2883110bf-metrics-certs\") pod \"network-metrics-daemon-sdd5b\" (UID: \"1d7692f7-4101-4c41-86f0-d8c2883110bf\") " pod="openshift-multus/network-metrics-daemon-sdd5b" Dec 13 22:17:23 crc kubenswrapper[4866]: E1213 22:17:23.782320 4866 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 22:17:23 crc kubenswrapper[4866]: E1213 22:17:23.782372 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d7692f7-4101-4c41-86f0-d8c2883110bf-metrics-certs podName:1d7692f7-4101-4c41-86f0-d8c2883110bf nodeName:}" failed. No retries permitted until 2025-12-13 22:17:27.782356494 +0000 UTC m=+45.823695036 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d7692f7-4101-4c41-86f0-d8c2883110bf-metrics-certs") pod "network-metrics-daemon-sdd5b" (UID: "1d7692f7-4101-4c41-86f0-d8c2883110bf") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.877776 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.877823 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.877833 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.877848 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.877858 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:23Z","lastTransitionTime":"2025-12-13T22:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.979371 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.979411 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.979419 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.979431 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:23 crc kubenswrapper[4866]: I1213 22:17:23.979441 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:23Z","lastTransitionTime":"2025-12-13T22:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.081567 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.081705 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.081715 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.081731 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.081739 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:24Z","lastTransitionTime":"2025-12-13T22:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.184299 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.184339 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.184382 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.184402 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.184414 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:24Z","lastTransitionTime":"2025-12-13T22:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.212358 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 22:17:24 crc kubenswrapper[4866]: E1213 22:17:24.212498 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.212920 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 22:17:24 crc kubenswrapper[4866]: E1213 22:17:24.212999 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.213158 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdd5b" Dec 13 22:17:24 crc kubenswrapper[4866]: E1213 22:17:24.213235 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdd5b" podUID="1d7692f7-4101-4c41-86f0-d8c2883110bf" Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.285971 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.286007 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.286018 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.286034 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.286066 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:24Z","lastTransitionTime":"2025-12-13T22:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.388395 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.388434 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.388445 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.388491 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.388504 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:24Z","lastTransitionTime":"2025-12-13T22:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.490318 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.490385 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.490397 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.490412 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.490424 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:24Z","lastTransitionTime":"2025-12-13T22:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.559336 4866 generic.go:334] "Generic (PLEG): container finished" podID="9a704d7c-0ecc-4fb7-96d5-180353c3bf59" containerID="838746a5c663c7171f8fa5266acc172b53dadf68241d5d6202088a6509ca2489" exitCode=0 Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.559388 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mh6xz" event={"ID":"9a704d7c-0ecc-4fb7-96d5-180353c3bf59","Type":"ContainerDied","Data":"838746a5c663c7171f8fa5266acc172b53dadf68241d5d6202088a6509ca2489"} Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.592527 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.592557 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.592567 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.592595 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.592605 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:24Z","lastTransitionTime":"2025-12-13T22:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.694000 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.694026 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.694034 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.694071 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.694082 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:24Z","lastTransitionTime":"2025-12-13T22:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.796183 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.796243 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.796252 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.796266 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.796275 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:24Z","lastTransitionTime":"2025-12-13T22:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.899032 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.899079 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.899088 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.899101 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:24 crc kubenswrapper[4866]: I1213 22:17:24.899109 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:24Z","lastTransitionTime":"2025-12-13T22:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.000882 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.000911 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.000920 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.000934 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.000943 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:25Z","lastTransitionTime":"2025-12-13T22:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.104465 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.104535 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.104547 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.104565 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.104602 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:25Z","lastTransitionTime":"2025-12-13T22:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.207540 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.207563 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.207573 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.207586 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.207594 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:25Z","lastTransitionTime":"2025-12-13T22:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.212162 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 22:17:25 crc kubenswrapper[4866]: E1213 22:17:25.212265 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.309070 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.309097 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.309106 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.309119 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.309129 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:25Z","lastTransitionTime":"2025-12-13T22:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.396035 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.412184 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.412220 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.412249 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.412264 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.412276 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:25Z","lastTransitionTime":"2025-12-13T22:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.516650 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.516737 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.516760 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.516788 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.516808 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:25Z","lastTransitionTime":"2025-12-13T22:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.567636 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mh6xz" event={"ID":"9a704d7c-0ecc-4fb7-96d5-180353c3bf59","Type":"ContainerStarted","Data":"a989eee5122dbc0feac240cae15f377165cf2f197d4b74f7f7a722c3841b96cd"} Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.607668 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-mh6xz" podStartSLOduration=19.607639973 podStartE2EDuration="19.607639973s" podCreationTimestamp="2025-12-13 22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:25.607548681 +0000 UTC m=+43.648887233" watchObservedRunningTime="2025-12-13 22:17:25.607639973 +0000 UTC m=+43.648978525" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.619674 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.619723 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.619733 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.619751 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
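
The pod_startup_latency_tracker entry above reports podStartSLOduration=19.607639973 for multus-additional-cni-plugins-mh6xz. Its timestamps are printed Go time.Time values: the trailing m=+43.648887233 is Go's monotonic-clock reading, and podCreationTimestamp is only second-granular, so recomputing the duration from the printed wall-clock times agrees only to about a millisecond. A sketch of a parser written for this format (not a kubelet API; the function name is my own):

    import re
    from datetime import datetime

    def parse_go_time(ts):
        """Parse strings like '2025-12-13 22:17:25.607548681 +0000 UTC m=+43.648887233':
        drop the monotonic reading and the 'UTC' suffix, trim ns to us."""
        ts = re.sub(r" m=\+[\d.]+$", "", ts).replace(" UTC", "")
        ts = re.sub(r"(\.\d{6})\d+", r"\1", ts)  # nanoseconds -> microseconds
        fmt = "%Y-%m-%d %H:%M:%S.%f %z" if "." in ts else "%Y-%m-%d %H:%M:%S %z"
        return datetime.strptime(ts, fmt)

    created = parse_go_time("2025-12-13 22:17:06 +0000 UTC")
    running = parse_go_time("2025-12-13 22:17:25.607548681 +0000 UTC m=+43.648887233")
    print((running - created).total_seconds())  # 19.607548, vs SLO-tracked 19.607639973
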
Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.619763 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:25Z","lastTransitionTime":"2025-12-13T22:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.722774 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.722831 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.722847 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.722873 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.722889 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:25Z","lastTransitionTime":"2025-12-13T22:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.825374 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.825410 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.825419 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.825434 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.825443 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:25Z","lastTransitionTime":"2025-12-13T22:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.928951 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.928993 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.929004 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.929022 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:25 crc kubenswrapper[4866]: I1213 22:17:25.929034 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:25Z","lastTransitionTime":"2025-12-13T22:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.031726 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.031764 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.031780 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.031796 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.031806 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:26Z","lastTransitionTime":"2025-12-13T22:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.133746 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.133777 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.133785 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.133798 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.133806 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:26Z","lastTransitionTime":"2025-12-13T22:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.213143 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.213182 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdd5b" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.213182 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 22:17:26 crc kubenswrapper[4866]: E1213 22:17:26.213270 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 22:17:26 crc kubenswrapper[4866]: E1213 22:17:26.213418 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdd5b" podUID="1d7692f7-4101-4c41-86f0-d8c2883110bf" Dec 13 22:17:26 crc kubenswrapper[4866]: E1213 22:17:26.213503 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.236065 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.236110 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.236120 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.236136 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.236148 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:26Z","lastTransitionTime":"2025-12-13T22:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.338409 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.338451 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.338490 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.338507 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.338517 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:26Z","lastTransitionTime":"2025-12-13T22:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.440562 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.440601 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.440615 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.440633 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.440643 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:26Z","lastTransitionTime":"2025-12-13T22:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.542926 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.542961 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.542971 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.542985 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.542996 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:26Z","lastTransitionTime":"2025-12-13T22:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.580633 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" event={"ID":"b977f313-87b4-4173-9263-91bc45047631","Type":"ContainerStarted","Data":"eb3141a3c1881cd3e9ea259080b49b3440d2f7b6ac7642a67c10a29ce79c7d95"} Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.581397 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.581466 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.581495 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.606320 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.611875 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.645881 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" podStartSLOduration=20.645862977 podStartE2EDuration="20.645862977s" podCreationTimestamp="2025-12-13 22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:26.615703257 +0000 UTC m=+44.657041809" watchObservedRunningTime="2025-12-13 22:17:26.645862977 +0000 UTC m=+44.687201529" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.646118 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.646342 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.646362 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.646385 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
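
The ovnkube-node pod's readiness flips here: three "SyncLoop (probe)" entries with an empty status are followed by two with status="ready", just before its startup duration is recorded. Since OVN-Kubernetes is the network provider that writes the missing CNI configuration, this is consistent with the node finally reporting Ready at 22:17:31 further down. A sketch that tallies probe results per pod from this line format (function name is my own):

    import re
    from collections import Counter

    PROBE_RE = re.compile(
        r'"SyncLoop \(probe\)" probe="(\w+)" status="([^"]*)" pod="([^"]+)"')

    def probe_counts(journal_text):
        """Count (probe, status, pod) tuples; in this log the status is ""
        before the container first reports ready."""
        return Counter(PROBE_RE.findall(journal_text))

Run over this section, it counts three ('readiness', '', ...) and two ('readiness', 'ready', ...) entries for ovnkube-node-zrmrs.
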
Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.646401 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:26Z","lastTransitionTime":"2025-12-13T22:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.749031 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.749083 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.749096 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.749110 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.749121 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:26Z","lastTransitionTime":"2025-12-13T22:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.851307 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.851331 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.851339 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.851351 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.851359 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:26Z","lastTransitionTime":"2025-12-13T22:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.953342 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.953374 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.953382 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.953395 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:26 crc kubenswrapper[4866]: I1213 22:17:26.953404 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:26Z","lastTransitionTime":"2025-12-13T22:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 13 22:17:27 crc kubenswrapper[4866]: I1213 22:17:27.055090 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:27 crc kubenswrapper[4866]: I1213 22:17:27.055126 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:27 crc kubenswrapper[4866]: I1213 22:17:27.055137 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:27 crc kubenswrapper[4866]: I1213 22:17:27.055158 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:27 crc kubenswrapper[4866]: I1213 22:17:27.055169 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:27Z","lastTransitionTime":"2025-12-13T22:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:27 crc kubenswrapper[4866]: I1213 22:17:27.083234 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 22:17:27 crc kubenswrapper[4866]: I1213 22:17:27.083270 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 22:17:27 crc kubenswrapper[4866]: I1213 22:17:27.083279 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 22:17:27 crc kubenswrapper[4866]: I1213 22:17:27.083293 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 22:17:27 crc kubenswrapper[4866]: I1213 22:17:27.083302 4866 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T22:17:27Z","lastTransitionTime":"2025-12-13T22:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 22:17:27 crc kubenswrapper[4866]: I1213 22:17:27.123240 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-rd6cf"] Dec 13 22:17:27 crc kubenswrapper[4866]: I1213 22:17:27.123548 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rd6cf" Dec 13 22:17:27 crc kubenswrapper[4866]: I1213 22:17:27.125593 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 13 22:17:27 crc kubenswrapper[4866]: I1213 22:17:27.125739 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 13 22:17:27 crc kubenswrapper[4866]: I1213 22:17:27.126059 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 13 22:17:27 crc kubenswrapper[4866]: I1213 22:17:27.126915 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 13 22:17:27 crc kubenswrapper[4866]: I1213 22:17:27.212669 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 22:17:27 crc kubenswrapper[4866]: E1213 22:17:27.212979 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 22:17:27 crc kubenswrapper[4866]: I1213 22:17:27.213317 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8c0966c8-4b28-4cfc-9785-4bef756a3a60-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rd6cf\" (UID: \"8c0966c8-4b28-4cfc-9785-4bef756a3a60\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rd6cf" Dec 13 22:17:27 crc kubenswrapper[4866]: I1213 22:17:27.213375 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c0966c8-4b28-4cfc-9785-4bef756a3a60-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rd6cf\" (UID: \"8c0966c8-4b28-4cfc-9785-4bef756a3a60\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rd6cf" Dec 13 22:17:27 crc kubenswrapper[4866]: I1213 22:17:27.213406 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8c0966c8-4b28-4cfc-9785-4bef756a3a60-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rd6cf\" (UID: \"8c0966c8-4b28-4cfc-9785-4bef756a3a60\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rd6cf" Dec 13 22:17:27 crc kubenswrapper[4866]: I1213 22:17:27.213474 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8c0966c8-4b28-4cfc-9785-4bef756a3a60-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rd6cf\" (UID: \"8c0966c8-4b28-4cfc-9785-4bef756a3a60\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rd6cf" Dec 13 22:17:27 crc kubenswrapper[4866]: I1213 22:17:27.213504 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
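
The reconciler entries above show the kubelet's volume manager registering each volume of the just-added cluster-version-operator pod (VerifyControllerAttachedVolume) ahead of the MountVolume operations that follow. A sketch that tabulates volume name and plugin per pod from such entries; the regex is written against the escaped \" quoting visible in this journal, and the helper is illustrative:

    import re
    from collections import defaultdict

    VOL_RE = re.compile(
        r'VerifyControllerAttachedVolume started for volume \\"([^"\\]+)\\" '
        r'\(UniqueName: \\"([^"\\]+)\\"\).*?pod="([^"]+)"')

    def volumes_by_pod(journal_text):
        """Map pod -> [(volume name, volume plugin)], e.g. ('serving-cert', 'secret')."""
        vols = defaultdict(list)
        for name, unique, pod in VOL_RE.findall(journal_text):
            plugin = unique.split("/")[1]  # UniqueName is kubernetes.io/<plugin>/<uid>-<name>
            vols[pod].append((name, plugin))
        return vols

Over this section it would report kube-api-access (projected), serving-cert (secret), etc-ssl-certs and etc-cvo-updatepayloads (host-path), and service-ca (configmap) for the cluster-version-operator pod.
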
\"kubernetes.io/configmap/8c0966c8-4b28-4cfc-9785-4bef756a3a60-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rd6cf\" (UID: \"8c0966c8-4b28-4cfc-9785-4bef756a3a60\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rd6cf" Dec 13 22:17:27 crc kubenswrapper[4866]: I1213 22:17:27.314879 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8c0966c8-4b28-4cfc-9785-4bef756a3a60-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rd6cf\" (UID: \"8c0966c8-4b28-4cfc-9785-4bef756a3a60\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rd6cf" Dec 13 22:17:27 crc kubenswrapper[4866]: I1213 22:17:27.314936 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8c0966c8-4b28-4cfc-9785-4bef756a3a60-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rd6cf\" (UID: \"8c0966c8-4b28-4cfc-9785-4bef756a3a60\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rd6cf" Dec 13 22:17:27 crc kubenswrapper[4866]: I1213 22:17:27.314967 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8c0966c8-4b28-4cfc-9785-4bef756a3a60-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rd6cf\" (UID: \"8c0966c8-4b28-4cfc-9785-4bef756a3a60\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rd6cf" Dec 13 22:17:27 crc kubenswrapper[4866]: I1213 22:17:27.314999 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c0966c8-4b28-4cfc-9785-4bef756a3a60-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rd6cf\" (UID: \"8c0966c8-4b28-4cfc-9785-4bef756a3a60\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rd6cf" Dec 13 22:17:27 crc kubenswrapper[4866]: I1213 22:17:27.315020 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8c0966c8-4b28-4cfc-9785-4bef756a3a60-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rd6cf\" (UID: \"8c0966c8-4b28-4cfc-9785-4bef756a3a60\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rd6cf" Dec 13 22:17:27 crc kubenswrapper[4866]: I1213 22:17:27.315062 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8c0966c8-4b28-4cfc-9785-4bef756a3a60-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rd6cf\" (UID: \"8c0966c8-4b28-4cfc-9785-4bef756a3a60\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rd6cf" Dec 13 22:17:27 crc kubenswrapper[4866]: I1213 22:17:27.315135 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8c0966c8-4b28-4cfc-9785-4bef756a3a60-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rd6cf\" (UID: \"8c0966c8-4b28-4cfc-9785-4bef756a3a60\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rd6cf" Dec 13 22:17:27 crc kubenswrapper[4866]: I1213 22:17:27.315961 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8c0966c8-4b28-4cfc-9785-4bef756a3a60-service-ca\") pod 
\"cluster-version-operator-5c965bbfc6-rd6cf\" (UID: \"8c0966c8-4b28-4cfc-9785-4bef756a3a60\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rd6cf" Dec 13 22:17:27 crc kubenswrapper[4866]: I1213 22:17:27.319356 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c0966c8-4b28-4cfc-9785-4bef756a3a60-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rd6cf\" (UID: \"8c0966c8-4b28-4cfc-9785-4bef756a3a60\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rd6cf" Dec 13 22:17:27 crc kubenswrapper[4866]: I1213 22:17:27.333614 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8c0966c8-4b28-4cfc-9785-4bef756a3a60-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rd6cf\" (UID: \"8c0966c8-4b28-4cfc-9785-4bef756a3a60\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rd6cf" Dec 13 22:17:27 crc kubenswrapper[4866]: I1213 22:17:27.438217 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rd6cf" Dec 13 22:17:27 crc kubenswrapper[4866]: I1213 22:17:27.584221 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rd6cf" event={"ID":"8c0966c8-4b28-4cfc-9785-4bef756a3a60","Type":"ContainerStarted","Data":"459a41a2a306979eb49760ca310a830b18d1b15993c986916699fbba3da353fa"} Dec 13 22:17:27 crc kubenswrapper[4866]: I1213 22:17:27.819831 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d7692f7-4101-4c41-86f0-d8c2883110bf-metrics-certs\") pod \"network-metrics-daemon-sdd5b\" (UID: \"1d7692f7-4101-4c41-86f0-d8c2883110bf\") " pod="openshift-multus/network-metrics-daemon-sdd5b" Dec 13 22:17:27 crc kubenswrapper[4866]: E1213 22:17:27.819987 4866 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 22:17:27 crc kubenswrapper[4866]: E1213 22:17:27.820149 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d7692f7-4101-4c41-86f0-d8c2883110bf-metrics-certs podName:1d7692f7-4101-4c41-86f0-d8c2883110bf nodeName:}" failed. No retries permitted until 2025-12-13 22:17:35.82013269 +0000 UTC m=+53.861471242 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d7692f7-4101-4c41-86f0-d8c2883110bf-metrics-certs") pod "network-metrics-daemon-sdd5b" (UID: "1d7692f7-4101-4c41-86f0-d8c2883110bf") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 22:17:28 crc kubenswrapper[4866]: I1213 22:17:28.212687 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdd5b" Dec 13 22:17:28 crc kubenswrapper[4866]: E1213 22:17:28.212977 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sdd5b" podUID="1d7692f7-4101-4c41-86f0-d8c2883110bf" Dec 13 22:17:28 crc kubenswrapper[4866]: I1213 22:17:28.212790 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 22:17:28 crc kubenswrapper[4866]: E1213 22:17:28.213072 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 22:17:28 crc kubenswrapper[4866]: I1213 22:17:28.213132 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 22:17:28 crc kubenswrapper[4866]: E1213 22:17:28.213177 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 22:17:28 crc kubenswrapper[4866]: I1213 22:17:28.527233 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-sdd5b"] Dec 13 22:17:28 crc kubenswrapper[4866]: I1213 22:17:28.587388 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdd5b" Dec 13 22:17:28 crc kubenswrapper[4866]: E1213 22:17:28.587504 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdd5b" podUID="1d7692f7-4101-4c41-86f0-d8c2883110bf" Dec 13 22:17:28 crc kubenswrapper[4866]: I1213 22:17:28.587714 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rd6cf" event={"ID":"8c0966c8-4b28-4cfc-9785-4bef756a3a60","Type":"ContainerStarted","Data":"e4099f44ac18614b9f7b1dfe7b97ab806172ddcbb757ca9213ef89a08a37f943"} Dec 13 22:17:28 crc kubenswrapper[4866]: I1213 22:17:28.602461 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rd6cf" podStartSLOduration=22.602442143 podStartE2EDuration="22.602442143s" podCreationTimestamp="2025-12-13 22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:28.601968442 +0000 UTC m=+46.643306994" watchObservedRunningTime="2025-12-13 22:17:28.602442143 +0000 UTC m=+46.643780695" Dec 13 22:17:29 crc kubenswrapper[4866]: I1213 22:17:29.212456 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 22:17:29 crc kubenswrapper[4866]: E1213 22:17:29.212550 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 22:17:30 crc kubenswrapper[4866]: I1213 22:17:30.212953 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdd5b" Dec 13 22:17:30 crc kubenswrapper[4866]: I1213 22:17:30.212995 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 22:17:30 crc kubenswrapper[4866]: I1213 22:17:30.213043 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 22:17:30 crc kubenswrapper[4866]: E1213 22:17:30.213095 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdd5b" podUID="1d7692f7-4101-4c41-86f0-d8c2883110bf" Dec 13 22:17:30 crc kubenswrapper[4866]: E1213 22:17:30.213235 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 22:17:30 crc kubenswrapper[4866]: E1213 22:17:30.213303 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.212694 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 22:17:31 crc kubenswrapper[4866]: E1213 22:17:31.212823 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.665798 4866 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.665951 4866 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.712139 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-ppg68"] Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.712610 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-ppg68" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.714067 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-69pzm"] Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.714675 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-69pzm" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.717497 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-s54hq"] Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.717884 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-s54hq" Dec 13 22:17:31 crc kubenswrapper[4866]: W1213 22:17:31.720756 4866 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-tls": failed to list *v1.Secret: secrets "machine-api-operator-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Dec 13 22:17:31 crc kubenswrapper[4866]: E1213 22:17:31.720953 4866 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-api-operator-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.720769 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.726792 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.727000 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.727737 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fjf67"] Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.728145 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.728456 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.729776 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.730464 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.730775 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-l4vcs"] Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.731415 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l4vcs" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.731462 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-s4jkf"] Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.737492 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-s4jkf" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.741013 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.741188 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.756017 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.756147 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.756281 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.756509 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.756660 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnz5c"] Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.757292 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnz5c" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.757783 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-qcsh9"] Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.758237 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-b2688"] Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.758561 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-b2688" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.758983 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-zp55r"] Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.759382 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-zp55r" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.759656 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qcsh9" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.761365 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.761635 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.761898 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.762258 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.762410 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.762507 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.762619 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.762434 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.763142 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.762460 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.763795 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.764069 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.764331 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vvnn7"] Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.764762 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-tlbgv"] Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.765038 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-tlbgv" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.765228 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6svzb"] Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.765283 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vvnn7" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.765519 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.765561 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6b8290a2-1abc-4343-8aa9-27a3f16f64f7-images\") pod \"machine-api-operator-5694c8668f-ppg68\" (UID: \"6b8290a2-1abc-4343-8aa9-27a3f16f64f7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ppg68" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.765585 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b47c24d-6353-449a-b61c-60672ef29fbd-config\") pod \"console-operator-58897d9998-s4jkf\" (UID: \"9b47c24d-6353-449a-b61c-60672ef29fbd\") " pod="openshift-console-operator/console-operator-58897d9998-s4jkf" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.765607 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.765629 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvwhb\" (UniqueName: \"kubernetes.io/projected/6b8290a2-1abc-4343-8aa9-27a3f16f64f7-kube-api-access-qvwhb\") pod \"machine-api-operator-5694c8668f-ppg68\" (UID: \"6b8290a2-1abc-4343-8aa9-27a3f16f64f7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ppg68" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.765652 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b8290a2-1abc-4343-8aa9-27a3f16f64f7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ppg68\" (UID: \"6b8290a2-1abc-4343-8aa9-27a3f16f64f7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ppg68" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.765682 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b47c24d-6353-449a-b61c-60672ef29fbd-trusted-ca\") pod \"console-operator-58897d9998-s4jkf\" (UID: \"9b47c24d-6353-449a-b61c-60672ef29fbd\") " 
pod="openshift-console-operator/console-operator-58897d9998-s4jkf" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.765703 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73e28583-3224-4f65-a4c6-c1aee16deda8-serving-cert\") pod \"route-controller-manager-6576b87f9c-69pzm\" (UID: \"73e28583-3224-4f65-a4c6-c1aee16deda8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-69pzm" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.765723 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.765751 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b601d73e-8fa8-4ece-8141-6a289f058547-config\") pod \"machine-approver-56656f9798-l4vcs\" (UID: \"b601d73e-8fa8-4ece-8141-6a289f058547\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l4vcs" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.765776 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b601d73e-8fa8-4ece-8141-6a289f058547-machine-approver-tls\") pod \"machine-approver-56656f9798-l4vcs\" (UID: \"b601d73e-8fa8-4ece-8141-6a289f058547\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l4vcs" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.765798 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.765817 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6svzb" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.765831 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a10e286-d082-421b-b0c0-a17de2547023-serving-cert\") pod \"authentication-operator-69f744f599-s54hq\" (UID: \"5a10e286-d082-421b-b0c0-a17de2547023\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s54hq" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.765879 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.765901 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.765923 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.765946 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvvrq\" (UniqueName: \"kubernetes.io/projected/5a10e286-d082-421b-b0c0-a17de2547023-kube-api-access-tvvrq\") pod \"authentication-operator-69f744f599-s54hq\" (UID: \"5a10e286-d082-421b-b0c0-a17de2547023\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s54hq" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.765969 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.765990 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b8290a2-1abc-4343-8aa9-27a3f16f64f7-config\") pod \"machine-api-operator-5694c8668f-ppg68\" (UID: \"6b8290a2-1abc-4343-8aa9-27a3f16f64f7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ppg68" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.766007 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9b47c24d-6353-449a-b61c-60672ef29fbd-serving-cert\") pod \"console-operator-58897d9998-s4jkf\" (UID: \"9b47c24d-6353-449a-b61c-60672ef29fbd\") " pod="openshift-console-operator/console-operator-58897d9998-s4jkf" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.766028 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.766076 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjdfz\" (UniqueName: \"kubernetes.io/projected/9b47c24d-6353-449a-b61c-60672ef29fbd-kube-api-access-hjdfz\") pod \"console-operator-58897d9998-s4jkf\" (UID: \"9b47c24d-6353-449a-b61c-60672ef29fbd\") " pod="openshift-console-operator/console-operator-58897d9998-s4jkf" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.766098 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/144e732e-78b7-4e31-8f30-ed505c2ae0e9-audit-dir\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.766118 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b58c7\" (UniqueName: \"kubernetes.io/projected/144e732e-78b7-4e31-8f30-ed505c2ae0e9-kube-api-access-b58c7\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.766138 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b601d73e-8fa8-4ece-8141-6a289f058547-auth-proxy-config\") pod \"machine-approver-56656f9798-l4vcs\" (UID: \"b601d73e-8fa8-4ece-8141-6a289f058547\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l4vcs" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.766158 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/144e732e-78b7-4e31-8f30-ed505c2ae0e9-audit-policies\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.766181 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.766203 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq28b\" 
(UniqueName: \"kubernetes.io/projected/73e28583-3224-4f65-a4c6-c1aee16deda8-kube-api-access-kq28b\") pod \"route-controller-manager-6576b87f9c-69pzm\" (UID: \"73e28583-3224-4f65-a4c6-c1aee16deda8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-69pzm" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.766235 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a10e286-d082-421b-b0c0-a17de2547023-config\") pod \"authentication-operator-69f744f599-s54hq\" (UID: \"5a10e286-d082-421b-b0c0-a17de2547023\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s54hq" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.766256 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a10e286-d082-421b-b0c0-a17de2547023-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-s54hq\" (UID: \"5a10e286-d082-421b-b0c0-a17de2547023\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s54hq" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.766275 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a10e286-d082-421b-b0c0-a17de2547023-service-ca-bundle\") pod \"authentication-operator-69f744f599-s54hq\" (UID: \"5a10e286-d082-421b-b0c0-a17de2547023\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s54hq" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.766294 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73e28583-3224-4f65-a4c6-c1aee16deda8-config\") pod \"route-controller-manager-6576b87f9c-69pzm\" (UID: \"73e28583-3224-4f65-a4c6-c1aee16deda8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-69pzm" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.766315 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.766336 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htp5b\" (UniqueName: \"kubernetes.io/projected/b601d73e-8fa8-4ece-8141-6a289f058547-kube-api-access-htp5b\") pod \"machine-approver-56656f9798-l4vcs\" (UID: \"b601d73e-8fa8-4ece-8141-6a289f058547\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l4vcs" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.766357 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/73e28583-3224-4f65-a4c6-c1aee16deda8-client-ca\") pod \"route-controller-manager-6576b87f9c-69pzm\" (UID: \"73e28583-3224-4f65-a4c6-c1aee16deda8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-69pzm" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.770167 4866 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.770335 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.770444 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.770531 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.770619 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.771367 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.771454 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-55ztr"] Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.771903 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-55ztr" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.783205 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.783359 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wzf86"] Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.783711 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ft9zs"] Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.783979 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-brrt8"] Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.784310 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-brrt8" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.784374 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.784641 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.784810 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ft9zs" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.784851 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.785298 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.785549 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.785740 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.785924 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.786140 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.789379 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.789597 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.790696 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.790823 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.790934 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.791037 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.791156 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.791247 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.791337 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.792177 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.797742 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.797962 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 
22:17:31.798086 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.812803 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-wbx6k"] Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.813248 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.813566 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-wbx6k" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.813682 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.813954 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.814343 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.815986 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vnj9d"] Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.816953 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vnj9d" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.818102 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.818278 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.819029 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.819614 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.820378 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.820973 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.820998 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.832187 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.833785 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.852079 4866 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-config-operator"/"kube-root-ca.crt" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.852607 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.852762 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.854129 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.854264 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.854378 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.854490 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.854625 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.855220 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.855496 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.855675 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.855901 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.856196 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.858234 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.858361 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.858771 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.858875 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.863819 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.863879 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.864475 4866 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.864670 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.864929 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.865439 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.867782 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hp9fq"] Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.868283 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcftv"] Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.868643 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcftv" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.869104 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.872900 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hp9fq" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.872931 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2b705b1-0852-447c-9bed-e23342613e1f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vnj9d\" (UID: \"c2b705b1-0852-447c-9bed-e23342613e1f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vnj9d" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.872966 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b601d73e-8fa8-4ece-8141-6a289f058547-config\") pod \"machine-approver-56656f9798-l4vcs\" (UID: \"b601d73e-8fa8-4ece-8141-6a289f058547\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l4vcs" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.872991 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0f6a72e7-f4e0-4628-91ad-c3f81514f9f9-console-config\") pod \"console-f9d7485db-zp55r\" (UID: \"0f6a72e7-f4e0-4628-91ad-c3f81514f9f9\") " pod="openshift-console/console-f9d7485db-zp55r" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.873007 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f12ed525-d75a-402f-b6c5-ce6298cb98f1-metrics-certs\") pod \"router-default-5444994796-wbx6k\" (UID: \"f12ed525-d75a-402f-b6c5-ce6298cb98f1\") " pod="openshift-ingress/router-default-5444994796-wbx6k" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.873025 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/b601d73e-8fa8-4ece-8141-6a289f058547-machine-approver-tls\") pod \"machine-approver-56656f9798-l4vcs\" (UID: \"b601d73e-8fa8-4ece-8141-6a289f058547\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l4vcs" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.873041 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.873076 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be954a83-6cf4-4f06-9de5-0540e967cfe9-client-ca\") pod \"controller-manager-879f6c89f-b2688\" (UID: \"be954a83-6cf4-4f06-9de5-0540e967cfe9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b2688" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.873095 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a3f3a62c-f055-4e72-8b85-47b8d577ca3a-audit-policies\") pod \"apiserver-7bbb656c7d-qcsh9\" (UID: \"a3f3a62c-f055-4e72-8b85-47b8d577ca3a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qcsh9" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.873111 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2401c59-016a-4806-ac04-bbea98467a21-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ft9zs\" (UID: \"e2401c59-016a-4806-ac04-bbea98467a21\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ft9zs" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.873134 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a10e286-d082-421b-b0c0-a17de2547023-serving-cert\") pod \"authentication-operator-69f744f599-s54hq\" (UID: \"5a10e286-d082-421b-b0c0-a17de2547023\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s54hq" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.873155 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f70cb47b-ccf8-4be2-8e8d-9be364a22205-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-55ztr\" (UID: \"f70cb47b-ccf8-4be2-8e8d-9be364a22205\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-55ztr" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.873175 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7v2j\" (UniqueName: \"kubernetes.io/projected/b24e9c98-6286-4e1c-82ff-7048e879d889-kube-api-access-c7v2j\") pod \"openshift-apiserver-operator-796bbdcf4f-nnz5c\" (UID: \"b24e9c98-6286-4e1c-82ff-7048e879d889\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnz5c" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.873190 4866 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be954a83-6cf4-4f06-9de5-0540e967cfe9-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-b2688\" (UID: \"be954a83-6cf4-4f06-9de5-0540e967cfe9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b2688" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.873205 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a3f3a62c-f055-4e72-8b85-47b8d577ca3a-encryption-config\") pod \"apiserver-7bbb656c7d-qcsh9\" (UID: \"a3f3a62c-f055-4e72-8b85-47b8d577ca3a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qcsh9" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.873222 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2401c59-016a-4806-ac04-bbea98467a21-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ft9zs\" (UID: \"e2401c59-016a-4806-ac04-bbea98467a21\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ft9zs" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.872914 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.873574 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.872978 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.873804 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75g6q\" (UniqueName: \"kubernetes.io/projected/0f6a72e7-f4e0-4628-91ad-c3f81514f9f9-kube-api-access-75g6q\") pod \"console-f9d7485db-zp55r\" (UID: \"0f6a72e7-f4e0-4628-91ad-c3f81514f9f9\") " pod="openshift-console/console-f9d7485db-zp55r" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.873917 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.873942 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.874012 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.874038 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2b705b1-0852-447c-9bed-e23342613e1f-config\") pod 
\"kube-apiserver-operator-766d6c64bb-vnj9d\" (UID: \"c2b705b1-0852-447c-9bed-e23342613e1f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vnj9d" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.874072 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn27t\" (UniqueName: \"kubernetes.io/projected/be954a83-6cf4-4f06-9de5-0540e967cfe9-kube-api-access-nn27t\") pod \"controller-manager-879f6c89f-b2688\" (UID: \"be954a83-6cf4-4f06-9de5-0540e967cfe9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b2688" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.874098 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rhgl\" (UniqueName: \"kubernetes.io/projected/336a2b00-9af4-4f90-88a9-7920886c82ca-kube-api-access-7rhgl\") pod \"openshift-config-operator-7777fb866f-6svzb\" (UID: \"336a2b00-9af4-4f90-88a9-7920886c82ca\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6svzb" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.874118 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.874135 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0f6a72e7-f4e0-4628-91ad-c3f81514f9f9-oauth-serving-cert\") pod \"console-f9d7485db-zp55r\" (UID: \"0f6a72e7-f4e0-4628-91ad-c3f81514f9f9\") " pod="openshift-console/console-f9d7485db-zp55r" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.872572 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.874150 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a3f3a62c-f055-4e72-8b85-47b8d577ca3a-etcd-client\") pod \"apiserver-7bbb656c7d-qcsh9\" (UID: \"a3f3a62c-f055-4e72-8b85-47b8d577ca3a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qcsh9" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.874214 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvvrq\" (UniqueName: \"kubernetes.io/projected/5a10e286-d082-421b-b0c0-a17de2547023-kube-api-access-tvvrq\") pod \"authentication-operator-69f744f599-s54hq\" (UID: \"5a10e286-d082-421b-b0c0-a17de2547023\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s54hq" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.874237 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.874274 4866 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b8290a2-1abc-4343-8aa9-27a3f16f64f7-config\") pod \"machine-api-operator-5694c8668f-ppg68\" (UID: \"6b8290a2-1abc-4343-8aa9-27a3f16f64f7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ppg68" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.874290 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b47c24d-6353-449a-b61c-60672ef29fbd-serving-cert\") pod \"console-operator-58897d9998-s4jkf\" (UID: \"9b47c24d-6353-449a-b61c-60672ef29fbd\") " pod="openshift-console-operator/console-operator-58897d9998-s4jkf" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.874305 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.874322 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a3f3a62c-f055-4e72-8b85-47b8d577ca3a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-qcsh9\" (UID: \"a3f3a62c-f055-4e72-8b85-47b8d577ca3a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qcsh9" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.874342 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjdfz\" (UniqueName: \"kubernetes.io/projected/9b47c24d-6353-449a-b61c-60672ef29fbd-kube-api-access-hjdfz\") pod \"console-operator-58897d9998-s4jkf\" (UID: \"9b47c24d-6353-449a-b61c-60672ef29fbd\") " pod="openshift-console-operator/console-operator-58897d9998-s4jkf" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.874358 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/144e732e-78b7-4e31-8f30-ed505c2ae0e9-audit-dir\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.874374 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b58c7\" (UniqueName: \"kubernetes.io/projected/144e732e-78b7-4e31-8f30-ed505c2ae0e9-kube-api-access-b58c7\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.874393 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjsjx\" (UniqueName: \"kubernetes.io/projected/f70cb47b-ccf8-4be2-8e8d-9be364a22205-kube-api-access-jjsjx\") pod \"cluster-image-registry-operator-dc59b4c8b-55ztr\" (UID: \"f70cb47b-ccf8-4be2-8e8d-9be364a22205\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-55ztr" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.875295 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-wj4kw"] Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.875964 
4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-wj4kw" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.878766 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c89xx\" (UniqueName: \"kubernetes.io/projected/b5141b6f-cd05-4499-ae0d-cdd14f3f5a61-kube-api-access-c89xx\") pod \"downloads-7954f5f757-tlbgv\" (UID: \"b5141b6f-cd05-4499-ae0d-cdd14f3f5a61\") " pod="openshift-console/downloads-7954f5f757-tlbgv" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.878788 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/144e732e-78b7-4e31-8f30-ed505c2ae0e9-audit-dir\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.878809 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b601d73e-8fa8-4ece-8141-6a289f058547-auth-proxy-config\") pod \"machine-approver-56656f9798-l4vcs\" (UID: \"b601d73e-8fa8-4ece-8141-6a289f058547\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l4vcs" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.878828 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0f6a72e7-f4e0-4628-91ad-c3f81514f9f9-console-serving-cert\") pod \"console-f9d7485db-zp55r\" (UID: \"0f6a72e7-f4e0-4628-91ad-c3f81514f9f9\") " pod="openshift-console/console-f9d7485db-zp55r" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.878850 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/144e732e-78b7-4e31-8f30-ed505c2ae0e9-audit-policies\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.878893 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.878910 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f70cb47b-ccf8-4be2-8e8d-9be364a22205-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-55ztr\" (UID: \"f70cb47b-ccf8-4be2-8e8d-9be364a22205\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-55ztr" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.878927 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0f6a72e7-f4e0-4628-91ad-c3f81514f9f9-console-oauth-config\") pod \"console-f9d7485db-zp55r\" (UID: \"0f6a72e7-f4e0-4628-91ad-c3f81514f9f9\") " pod="openshift-console/console-f9d7485db-zp55r" Dec 13 22:17:31 crc 
kubenswrapper[4866]: I1213 22:17:31.878962 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a10e286-d082-421b-b0c0-a17de2547023-config\") pod \"authentication-operator-69f744f599-s54hq\" (UID: \"5a10e286-d082-421b-b0c0-a17de2547023\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s54hq" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.878982 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq28b\" (UniqueName: \"kubernetes.io/projected/73e28583-3224-4f65-a4c6-c1aee16deda8-kube-api-access-kq28b\") pod \"route-controller-manager-6576b87f9c-69pzm\" (UID: \"73e28583-3224-4f65-a4c6-c1aee16deda8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-69pzm" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.879321 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.879949 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b8290a2-1abc-4343-8aa9-27a3f16f64f7-config\") pod \"machine-api-operator-5694c8668f-ppg68\" (UID: \"6b8290a2-1abc-4343-8aa9-27a3f16f64f7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ppg68" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.880742 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.881501 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.881666 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.881747 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.882221 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.882709 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.882865 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.884657 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.884821 4866 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.885805 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.887874 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.901982 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-dmd7t"] Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.903008 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-dmd7t" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.903885 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b601d73e-8fa8-4ece-8141-6a289f058547-config\") pod \"machine-approver-56656f9798-l4vcs\" (UID: \"b601d73e-8fa8-4ece-8141-6a289f058547\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l4vcs" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.905618 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b47c24d-6353-449a-b61c-60672ef29fbd-serving-cert\") pod \"console-operator-58897d9998-s4jkf\" (UID: \"9b47c24d-6353-449a-b61c-60672ef29fbd\") " pod="openshift-console-operator/console-operator-58897d9998-s4jkf" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.906037 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a10e286-d082-421b-b0c0-a17de2547023-serving-cert\") pod \"authentication-operator-69f744f599-s54hq\" (UID: \"5a10e286-d082-421b-b0c0-a17de2547023\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s54hq" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.908823 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.908965 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b601d73e-8fa8-4ece-8141-6a289f058547-auth-proxy-config\") pod \"machine-approver-56656f9798-l4vcs\" (UID: \"b601d73e-8fa8-4ece-8141-6a289f058547\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l4vcs" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.910487 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b601d73e-8fa8-4ece-8141-6a289f058547-machine-approver-tls\") pod \"machine-approver-56656f9798-l4vcs\" (UID: \"b601d73e-8fa8-4ece-8141-6a289f058547\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l4vcs" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.912215 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/144e732e-78b7-4e31-8f30-ed505c2ae0e9-audit-policies\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.912466 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a10e286-d082-421b-b0c0-a17de2547023-config\") pod \"authentication-operator-69f744f599-s54hq\" (UID: \"5a10e286-d082-421b-b0c0-a17de2547023\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s54hq" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.912535 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a10e286-d082-421b-b0c0-a17de2547023-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-s54hq\" (UID: \"5a10e286-d082-421b-b0c0-a17de2547023\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s54hq" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.913467 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a10e286-d082-421b-b0c0-a17de2547023-service-ca-bundle\") pod \"authentication-operator-69f744f599-s54hq\" (UID: \"5a10e286-d082-421b-b0c0-a17de2547023\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s54hq" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.913546 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73e28583-3224-4f65-a4c6-c1aee16deda8-config\") pod \"route-controller-manager-6576b87f9c-69pzm\" (UID: \"73e28583-3224-4f65-a4c6-c1aee16deda8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-69pzm" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.913611 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv5p6\" (UniqueName: \"kubernetes.io/projected/e2401c59-016a-4806-ac04-bbea98467a21-kube-api-access-mv5p6\") pod \"openshift-controller-manager-operator-756b6f6bc6-ft9zs\" (UID: \"e2401c59-016a-4806-ac04-bbea98467a21\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ft9zs" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.914261 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.914504 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.915081 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: 
I1213 22:17:31.915348 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a10e286-d082-421b-b0c0-a17de2547023-service-ca-bundle\") pod \"authentication-operator-69f744f599-s54hq\" (UID: \"5a10e286-d082-421b-b0c0-a17de2547023\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s54hq" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.915385 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be954a83-6cf4-4f06-9de5-0540e967cfe9-serving-cert\") pod \"controller-manager-879f6c89f-b2688\" (UID: \"be954a83-6cf4-4f06-9de5-0540e967cfe9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b2688" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.915402 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73e28583-3224-4f65-a4c6-c1aee16deda8-config\") pod \"route-controller-manager-6576b87f9c-69pzm\" (UID: \"73e28583-3224-4f65-a4c6-c1aee16deda8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-69pzm" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.915459 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htp5b\" (UniqueName: \"kubernetes.io/projected/b601d73e-8fa8-4ece-8141-6a289f058547-kube-api-access-htp5b\") pod \"machine-approver-56656f9798-l4vcs\" (UID: \"b601d73e-8fa8-4ece-8141-6a289f058547\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l4vcs" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.915624 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/73e28583-3224-4f65-a4c6-c1aee16deda8-client-ca\") pod \"route-controller-manager-6576b87f9c-69pzm\" (UID: \"73e28583-3224-4f65-a4c6-c1aee16deda8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-69pzm" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.915668 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0771c5ba-de65-4932-a37e-b21a2337f265-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vvnn7\" (UID: \"0771c5ba-de65-4932-a37e-b21a2337f265\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vvnn7" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.915680 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a10e286-d082-421b-b0c0-a17de2547023-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-s54hq\" (UID: \"5a10e286-d082-421b-b0c0-a17de2547023\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s54hq" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.915871 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.916560 4866 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.916743 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/73e28583-3224-4f65-a4c6-c1aee16deda8-client-ca\") pod \"route-controller-manager-6576b87f9c-69pzm\" (UID: \"73e28583-3224-4f65-a4c6-c1aee16deda8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-69pzm" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.916778 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c2b705b1-0852-447c-9bed-e23342613e1f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vnj9d\" (UID: \"c2b705b1-0852-447c-9bed-e23342613e1f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vnj9d" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.916915 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-fw6xl"] Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.917130 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6b8290a2-1abc-4343-8aa9-27a3f16f64f7-images\") pod \"machine-api-operator-5694c8668f-ppg68\" (UID: \"6b8290a2-1abc-4343-8aa9-27a3f16f64f7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ppg68" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.917283 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b47c24d-6353-449a-b61c-60672ef29fbd-config\") pod \"console-operator-58897d9998-s4jkf\" (UID: \"9b47c24d-6353-449a-b61c-60672ef29fbd\") " pod="openshift-console-operator/console-operator-58897d9998-s4jkf" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.917341 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.917368 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.917393 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3f3a62c-f055-4e72-8b85-47b8d577ca3a-serving-cert\") pod \"apiserver-7bbb656c7d-qcsh9\" (UID: \"a3f3a62c-f055-4e72-8b85-47b8d577ca3a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qcsh9" Dec 13 22:17:31 crc 
kubenswrapper[4866]: I1213 22:17:31.917446 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/336a2b00-9af4-4f90-88a9-7920886c82ca-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6svzb\" (UID: \"336a2b00-9af4-4f90-88a9-7920886c82ca\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6svzb" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.917502 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvwhb\" (UniqueName: \"kubernetes.io/projected/6b8290a2-1abc-4343-8aa9-27a3f16f64f7-kube-api-access-qvwhb\") pod \"machine-api-operator-5694c8668f-ppg68\" (UID: \"6b8290a2-1abc-4343-8aa9-27a3f16f64f7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ppg68" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.917525 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvbwg\" (UniqueName: \"kubernetes.io/projected/f12ed525-d75a-402f-b6c5-ce6298cb98f1-kube-api-access-rvbwg\") pod \"router-default-5444994796-wbx6k\" (UID: \"f12ed525-d75a-402f-b6c5-ce6298cb98f1\") " pod="openshift-ingress/router-default-5444994796-wbx6k" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.917548 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0f6a72e7-f4e0-4628-91ad-c3f81514f9f9-service-ca\") pod \"console-f9d7485db-zp55r\" (UID: \"0f6a72e7-f4e0-4628-91ad-c3f81514f9f9\") " pod="openshift-console/console-f9d7485db-zp55r" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.917571 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f12ed525-d75a-402f-b6c5-ce6298cb98f1-default-certificate\") pod \"router-default-5444994796-wbx6k\" (UID: \"f12ed525-d75a-402f-b6c5-ce6298cb98f1\") " pod="openshift-ingress/router-default-5444994796-wbx6k" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.917626 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b8290a2-1abc-4343-8aa9-27a3f16f64f7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ppg68\" (UID: \"6b8290a2-1abc-4343-8aa9-27a3f16f64f7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ppg68" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.917649 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3f3a62c-f055-4e72-8b85-47b8d577ca3a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-qcsh9\" (UID: \"a3f3a62c-f055-4e72-8b85-47b8d577ca3a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qcsh9" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.917742 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b24e9c98-6286-4e1c-82ff-7048e879d889-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nnz5c\" (UID: \"b24e9c98-6286-4e1c-82ff-7048e879d889\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnz5c" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.917746 4866 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6b8290a2-1abc-4343-8aa9-27a3f16f64f7-images\") pod \"machine-api-operator-5694c8668f-ppg68\" (UID: \"6b8290a2-1abc-4343-8aa9-27a3f16f64f7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ppg68" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.917771 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f6a72e7-f4e0-4628-91ad-c3f81514f9f9-trusted-ca-bundle\") pod \"console-f9d7485db-zp55r\" (UID: \"0f6a72e7-f4e0-4628-91ad-c3f81514f9f9\") " pod="openshift-console/console-f9d7485db-zp55r" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.917805 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b47c24d-6353-449a-b61c-60672ef29fbd-trusted-ca\") pod \"console-operator-58897d9998-s4jkf\" (UID: \"9b47c24d-6353-449a-b61c-60672ef29fbd\") " pod="openshift-console-operator/console-operator-58897d9998-s4jkf" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.917958 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be954a83-6cf4-4f06-9de5-0540e967cfe9-config\") pod \"controller-manager-879f6c89f-b2688\" (UID: \"be954a83-6cf4-4f06-9de5-0540e967cfe9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b2688" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.918002 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b24e9c98-6286-4e1c-82ff-7048e879d889-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nnz5c\" (UID: \"b24e9c98-6286-4e1c-82ff-7048e879d889\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnz5c" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.918023 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f12ed525-d75a-402f-b6c5-ce6298cb98f1-service-ca-bundle\") pod \"router-default-5444994796-wbx6k\" (UID: \"f12ed525-d75a-402f-b6c5-ce6298cb98f1\") " pod="openshift-ingress/router-default-5444994796-wbx6k" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.918041 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f12ed525-d75a-402f-b6c5-ce6298cb98f1-stats-auth\") pod \"router-default-5444994796-wbx6k\" (UID: \"f12ed525-d75a-402f-b6c5-ce6298cb98f1\") " pod="openshift-ingress/router-default-5444994796-wbx6k" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.918243 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b47c24d-6353-449a-b61c-60672ef29fbd-config\") pod \"console-operator-58897d9998-s4jkf\" (UID: \"9b47c24d-6353-449a-b61c-60672ef29fbd\") " pod="openshift-console-operator/console-operator-58897d9998-s4jkf" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.918687 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73e28583-3224-4f65-a4c6-c1aee16deda8-serving-cert\") pod \"route-controller-manager-6576b87f9c-69pzm\" (UID: 
\"73e28583-3224-4f65-a4c6-c1aee16deda8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-69pzm" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.918736 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.918950 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b47c24d-6353-449a-b61c-60672ef29fbd-trusted-ca\") pod \"console-operator-58897d9998-s4jkf\" (UID: \"9b47c24d-6353-449a-b61c-60672ef29fbd\") " pod="openshift-console-operator/console-operator-58897d9998-s4jkf" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.919687 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fw6xl" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.919819 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.920522 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f70cb47b-ccf8-4be2-8e8d-9be364a22205-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-55ztr\" (UID: \"f70cb47b-ccf8-4be2-8e8d-9be364a22205\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-55ztr" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.920684 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a3f3a62c-f055-4e72-8b85-47b8d577ca3a-audit-dir\") pod \"apiserver-7bbb656c7d-qcsh9\" (UID: \"a3f3a62c-f055-4e72-8b85-47b8d577ca3a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qcsh9" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.920814 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svrf6\" (UniqueName: \"kubernetes.io/projected/a3f3a62c-f055-4e72-8b85-47b8d577ca3a-kube-api-access-svrf6\") pod \"apiserver-7bbb656c7d-qcsh9\" (UID: \"a3f3a62c-f055-4e72-8b85-47b8d577ca3a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qcsh9" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.920896 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.920958 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h557w\" 
(UniqueName: \"kubernetes.io/projected/0771c5ba-de65-4932-a37e-b21a2337f265-kube-api-access-h557w\") pod \"cluster-samples-operator-665b6dd947-vvnn7\" (UID: \"0771c5ba-de65-4932-a37e-b21a2337f265\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vvnn7" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.920979 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/336a2b00-9af4-4f90-88a9-7920886c82ca-serving-cert\") pod \"openshift-config-operator-7777fb866f-6svzb\" (UID: \"336a2b00-9af4-4f90-88a9-7920886c82ca\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6svzb" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.924487 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9nvcf"] Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.926325 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9nvcf" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.938431 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.938643 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.972421 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73e28583-3224-4f65-a4c6-c1aee16deda8-serving-cert\") pod \"route-controller-manager-6576b87f9c-69pzm\" (UID: \"73e28583-3224-4f65-a4c6-c1aee16deda8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-69pzm" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.973260 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.973445 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.973512 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.974390 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29427735-h22d8"] Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.975093 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vt92h"] Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.975145 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29427735-h22d8" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.975710 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vt92h" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.977095 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-72w7j"] Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.977683 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-72w7j" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.977817 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.978066 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.978314 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.984080 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-h29td"] Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.984720 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h29td" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.985454 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-g9k2k"] Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.986251 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-g9k2k" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.986579 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ps9gj"] Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.987144 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ps9gj" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.987958 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8p9hb"] Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.988747 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8p9hb" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.992093 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-c64bv"] Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.992611 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-jksbt"] Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.992780 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-c64bv" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.993680 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jksbt" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.994193 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-5247b"] Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.994714 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-5247b" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.994982 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fjbc"] Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.995666 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fjbc" Dec 13 22:17:31 crc kubenswrapper[4866]: I1213 22:17:31.998709 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r85zh"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.001433 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v9nm2"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.001712 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-ppg68"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.001729 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-dc7kh"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.002077 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-55ztr"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.002140 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-dc7kh" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.002500 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r85zh" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.002619 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v9nm2" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.004112 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fjf67"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.004334 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.021278 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-s54hq"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.025339 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vvnn7"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.025358 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6svzb"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.025370 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-s4jkf"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.024251 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0771c5ba-de65-4932-a37e-b21a2337f265-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vvnn7\" (UID: \"0771c5ba-de65-4932-a37e-b21a2337f265\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vvnn7" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.025441 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c2b705b1-0852-447c-9bed-e23342613e1f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vnj9d\" (UID: \"c2b705b1-0852-447c-9bed-e23342613e1f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vnj9d" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.025482 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3f3a62c-f055-4e72-8b85-47b8d577ca3a-serving-cert\") pod \"apiserver-7bbb656c7d-qcsh9\" (UID: \"a3f3a62c-f055-4e72-8b85-47b8d577ca3a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qcsh9" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.025508 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/336a2b00-9af4-4f90-88a9-7920886c82ca-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6svzb\" (UID: \"336a2b00-9af4-4f90-88a9-7920886c82ca\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6svzb" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.025541 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvbwg\" (UniqueName: \"kubernetes.io/projected/f12ed525-d75a-402f-b6c5-ce6298cb98f1-kube-api-access-rvbwg\") pod \"router-default-5444994796-wbx6k\" (UID: \"f12ed525-d75a-402f-b6c5-ce6298cb98f1\") " pod="openshift-ingress/router-default-5444994796-wbx6k" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.025564 4866 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0f6a72e7-f4e0-4628-91ad-c3f81514f9f9-service-ca\") pod \"console-f9d7485db-zp55r\" (UID: \"0f6a72e7-f4e0-4628-91ad-c3f81514f9f9\") " pod="openshift-console/console-f9d7485db-zp55r" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.025593 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f12ed525-d75a-402f-b6c5-ce6298cb98f1-default-certificate\") pod \"router-default-5444994796-wbx6k\" (UID: \"f12ed525-d75a-402f-b6c5-ce6298cb98f1\") " pod="openshift-ingress/router-default-5444994796-wbx6k" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.025624 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3f3a62c-f055-4e72-8b85-47b8d577ca3a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-qcsh9\" (UID: \"a3f3a62c-f055-4e72-8b85-47b8d577ca3a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qcsh9" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.025645 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b24e9c98-6286-4e1c-82ff-7048e879d889-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nnz5c\" (UID: \"b24e9c98-6286-4e1c-82ff-7048e879d889\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnz5c" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.025672 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f6a72e7-f4e0-4628-91ad-c3f81514f9f9-trusted-ca-bundle\") pod \"console-f9d7485db-zp55r\" (UID: \"0f6a72e7-f4e0-4628-91ad-c3f81514f9f9\") " pod="openshift-console/console-f9d7485db-zp55r" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.025707 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be954a83-6cf4-4f06-9de5-0540e967cfe9-config\") pod \"controller-manager-879f6c89f-b2688\" (UID: \"be954a83-6cf4-4f06-9de5-0540e967cfe9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b2688" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.025727 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b24e9c98-6286-4e1c-82ff-7048e879d889-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nnz5c\" (UID: \"b24e9c98-6286-4e1c-82ff-7048e879d889\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnz5c" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.025751 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f12ed525-d75a-402f-b6c5-ce6298cb98f1-service-ca-bundle\") pod \"router-default-5444994796-wbx6k\" (UID: \"f12ed525-d75a-402f-b6c5-ce6298cb98f1\") " pod="openshift-ingress/router-default-5444994796-wbx6k" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.025773 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f12ed525-d75a-402f-b6c5-ce6298cb98f1-stats-auth\") pod \"router-default-5444994796-wbx6k\" (UID: \"f12ed525-d75a-402f-b6c5-ce6298cb98f1\") " 
pod="openshift-ingress/router-default-5444994796-wbx6k" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.025817 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f70cb47b-ccf8-4be2-8e8d-9be364a22205-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-55ztr\" (UID: \"f70cb47b-ccf8-4be2-8e8d-9be364a22205\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-55ztr" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.025859 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a3f3a62c-f055-4e72-8b85-47b8d577ca3a-audit-dir\") pod \"apiserver-7bbb656c7d-qcsh9\" (UID: \"a3f3a62c-f055-4e72-8b85-47b8d577ca3a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qcsh9" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.025881 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svrf6\" (UniqueName: \"kubernetes.io/projected/a3f3a62c-f055-4e72-8b85-47b8d577ca3a-kube-api-access-svrf6\") pod \"apiserver-7bbb656c7d-qcsh9\" (UID: \"a3f3a62c-f055-4e72-8b85-47b8d577ca3a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qcsh9" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.025904 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h557w\" (UniqueName: \"kubernetes.io/projected/0771c5ba-de65-4932-a37e-b21a2337f265-kube-api-access-h557w\") pod \"cluster-samples-operator-665b6dd947-vvnn7\" (UID: \"0771c5ba-de65-4932-a37e-b21a2337f265\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vvnn7" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.025926 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/336a2b00-9af4-4f90-88a9-7920886c82ca-serving-cert\") pod \"openshift-config-operator-7777fb866f-6svzb\" (UID: \"336a2b00-9af4-4f90-88a9-7920886c82ca\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6svzb" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.025951 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2b705b1-0852-447c-9bed-e23342613e1f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vnj9d\" (UID: \"c2b705b1-0852-447c-9bed-e23342613e1f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vnj9d" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.025989 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0f6a72e7-f4e0-4628-91ad-c3f81514f9f9-console-config\") pod \"console-f9d7485db-zp55r\" (UID: \"0f6a72e7-f4e0-4628-91ad-c3f81514f9f9\") " pod="openshift-console/console-f9d7485db-zp55r" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.026010 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f12ed525-d75a-402f-b6c5-ce6298cb98f1-metrics-certs\") pod \"router-default-5444994796-wbx6k\" (UID: \"f12ed525-d75a-402f-b6c5-ce6298cb98f1\") " pod="openshift-ingress/router-default-5444994796-wbx6k" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.026034 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/be954a83-6cf4-4f06-9de5-0540e967cfe9-client-ca\") pod \"controller-manager-879f6c89f-b2688\" (UID: \"be954a83-6cf4-4f06-9de5-0540e967cfe9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b2688" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.026090 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a3f3a62c-f055-4e72-8b85-47b8d577ca3a-audit-policies\") pod \"apiserver-7bbb656c7d-qcsh9\" (UID: \"a3f3a62c-f055-4e72-8b85-47b8d577ca3a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qcsh9" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.026119 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2401c59-016a-4806-ac04-bbea98467a21-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ft9zs\" (UID: \"e2401c59-016a-4806-ac04-bbea98467a21\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ft9zs" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.026158 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f70cb47b-ccf8-4be2-8e8d-9be364a22205-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-55ztr\" (UID: \"f70cb47b-ccf8-4be2-8e8d-9be364a22205\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-55ztr" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.026182 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7v2j\" (UniqueName: \"kubernetes.io/projected/b24e9c98-6286-4e1c-82ff-7048e879d889-kube-api-access-c7v2j\") pod \"openshift-apiserver-operator-796bbdcf4f-nnz5c\" (UID: \"b24e9c98-6286-4e1c-82ff-7048e879d889\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnz5c" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.026208 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be954a83-6cf4-4f06-9de5-0540e967cfe9-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-b2688\" (UID: \"be954a83-6cf4-4f06-9de5-0540e967cfe9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b2688" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.026230 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a3f3a62c-f055-4e72-8b85-47b8d577ca3a-encryption-config\") pod \"apiserver-7bbb656c7d-qcsh9\" (UID: \"a3f3a62c-f055-4e72-8b85-47b8d577ca3a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qcsh9" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.026251 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2401c59-016a-4806-ac04-bbea98467a21-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ft9zs\" (UID: \"e2401c59-016a-4806-ac04-bbea98467a21\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ft9zs" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.026273 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75g6q\" (UniqueName: 
\"kubernetes.io/projected/0f6a72e7-f4e0-4628-91ad-c3f81514f9f9-kube-api-access-75g6q\") pod \"console-f9d7485db-zp55r\" (UID: \"0f6a72e7-f4e0-4628-91ad-c3f81514f9f9\") " pod="openshift-console/console-f9d7485db-zp55r" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.026300 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2b705b1-0852-447c-9bed-e23342613e1f-config\") pod \"kube-apiserver-operator-766d6c64bb-vnj9d\" (UID: \"c2b705b1-0852-447c-9bed-e23342613e1f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vnj9d" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.026324 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn27t\" (UniqueName: \"kubernetes.io/projected/be954a83-6cf4-4f06-9de5-0540e967cfe9-kube-api-access-nn27t\") pod \"controller-manager-879f6c89f-b2688\" (UID: \"be954a83-6cf4-4f06-9de5-0540e967cfe9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b2688" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.026345 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rhgl\" (UniqueName: \"kubernetes.io/projected/336a2b00-9af4-4f90-88a9-7920886c82ca-kube-api-access-7rhgl\") pod \"openshift-config-operator-7777fb866f-6svzb\" (UID: \"336a2b00-9af4-4f90-88a9-7920886c82ca\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6svzb" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.026372 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0f6a72e7-f4e0-4628-91ad-c3f81514f9f9-oauth-serving-cert\") pod \"console-f9d7485db-zp55r\" (UID: \"0f6a72e7-f4e0-4628-91ad-c3f81514f9f9\") " pod="openshift-console/console-f9d7485db-zp55r" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.026391 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a3f3a62c-f055-4e72-8b85-47b8d577ca3a-etcd-client\") pod \"apiserver-7bbb656c7d-qcsh9\" (UID: \"a3f3a62c-f055-4e72-8b85-47b8d577ca3a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qcsh9" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.026423 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a3f3a62c-f055-4e72-8b85-47b8d577ca3a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-qcsh9\" (UID: \"a3f3a62c-f055-4e72-8b85-47b8d577ca3a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qcsh9" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.026463 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjsjx\" (UniqueName: \"kubernetes.io/projected/f70cb47b-ccf8-4be2-8e8d-9be364a22205-kube-api-access-jjsjx\") pod \"cluster-image-registry-operator-dc59b4c8b-55ztr\" (UID: \"f70cb47b-ccf8-4be2-8e8d-9be364a22205\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-55ztr" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.026483 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c89xx\" (UniqueName: \"kubernetes.io/projected/b5141b6f-cd05-4499-ae0d-cdd14f3f5a61-kube-api-access-c89xx\") pod \"downloads-7954f5f757-tlbgv\" (UID: \"b5141b6f-cd05-4499-ae0d-cdd14f3f5a61\") " 
pod="openshift-console/downloads-7954f5f757-tlbgv" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.026505 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0f6a72e7-f4e0-4628-91ad-c3f81514f9f9-console-serving-cert\") pod \"console-f9d7485db-zp55r\" (UID: \"0f6a72e7-f4e0-4628-91ad-c3f81514f9f9\") " pod="openshift-console/console-f9d7485db-zp55r" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.026532 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f70cb47b-ccf8-4be2-8e8d-9be364a22205-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-55ztr\" (UID: \"f70cb47b-ccf8-4be2-8e8d-9be364a22205\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-55ztr" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.026553 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0f6a72e7-f4e0-4628-91ad-c3f81514f9f9-console-oauth-config\") pod \"console-f9d7485db-zp55r\" (UID: \"0f6a72e7-f4e0-4628-91ad-c3f81514f9f9\") " pod="openshift-console/console-f9d7485db-zp55r" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.026598 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv5p6\" (UniqueName: \"kubernetes.io/projected/e2401c59-016a-4806-ac04-bbea98467a21-kube-api-access-mv5p6\") pod \"openshift-controller-manager-operator-756b6f6bc6-ft9zs\" (UID: \"e2401c59-016a-4806-ac04-bbea98467a21\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ft9zs" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.026621 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be954a83-6cf4-4f06-9de5-0540e967cfe9-serving-cert\") pod \"controller-manager-879f6c89f-b2688\" (UID: \"be954a83-6cf4-4f06-9de5-0540e967cfe9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b2688" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.027141 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-qcsh9"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.027183 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-tlbgv"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.027193 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-kg5zb"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.027674 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-kg5zb" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.030364 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be954a83-6cf4-4f06-9de5-0540e967cfe9-client-ca\") pod \"controller-manager-879f6c89f-b2688\" (UID: \"be954a83-6cf4-4f06-9de5-0540e967cfe9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b2688" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.030809 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be954a83-6cf4-4f06-9de5-0540e967cfe9-config\") pod \"controller-manager-879f6c89f-b2688\" (UID: \"be954a83-6cf4-4f06-9de5-0540e967cfe9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b2688" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.031561 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a3f3a62c-f055-4e72-8b85-47b8d577ca3a-audit-policies\") pod \"apiserver-7bbb656c7d-qcsh9\" (UID: \"a3f3a62c-f055-4e72-8b85-47b8d577ca3a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qcsh9" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.031911 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-zp55r"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.031938 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-zgx58"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.032800 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-c5wt4"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.033269 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-c5wt4" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.033593 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0f6a72e7-f4e0-4628-91ad-c3f81514f9f9-console-serving-cert\") pod \"console-f9d7485db-zp55r\" (UID: \"0f6a72e7-f4e0-4628-91ad-c3f81514f9f9\") " pod="openshift-console/console-f9d7485db-zp55r" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.033689 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-zgx58" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.035766 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0f6a72e7-f4e0-4628-91ad-c3f81514f9f9-console-oauth-config\") pod \"console-f9d7485db-zp55r\" (UID: \"0f6a72e7-f4e0-4628-91ad-c3f81514f9f9\") " pod="openshift-console/console-f9d7485db-zp55r" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.036097 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0f6a72e7-f4e0-4628-91ad-c3f81514f9f9-oauth-serving-cert\") pod \"console-f9d7485db-zp55r\" (UID: \"0f6a72e7-f4e0-4628-91ad-c3f81514f9f9\") " pod="openshift-console/console-f9d7485db-zp55r" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.036396 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3f3a62c-f055-4e72-8b85-47b8d577ca3a-serving-cert\") pod \"apiserver-7bbb656c7d-qcsh9\" (UID: \"a3f3a62c-f055-4e72-8b85-47b8d577ca3a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qcsh9" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.037165 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vnj9d"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.037207 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9nvcf"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.037217 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29427735-h22d8"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.037971 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a3f3a62c-f055-4e72-8b85-47b8d577ca3a-etcd-client\") pod \"apiserver-7bbb656c7d-qcsh9\" (UID: \"a3f3a62c-f055-4e72-8b85-47b8d577ca3a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qcsh9" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.038156 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a3f3a62c-f055-4e72-8b85-47b8d577ca3a-encryption-config\") pod \"apiserver-7bbb656c7d-qcsh9\" (UID: \"a3f3a62c-f055-4e72-8b85-47b8d577ca3a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qcsh9" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.038576 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a3f3a62c-f055-4e72-8b85-47b8d577ca3a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-qcsh9\" (UID: \"a3f3a62c-f055-4e72-8b85-47b8d577ca3a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qcsh9" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.038823 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.038926 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a3f3a62c-f055-4e72-8b85-47b8d577ca3a-audit-dir\") pod \"apiserver-7bbb656c7d-qcsh9\" (UID: \"a3f3a62c-f055-4e72-8b85-47b8d577ca3a\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qcsh9" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.039624 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2401c59-016a-4806-ac04-bbea98467a21-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ft9zs\" (UID: \"e2401c59-016a-4806-ac04-bbea98467a21\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ft9zs" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.040640 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b24e9c98-6286-4e1c-82ff-7048e879d889-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nnz5c\" (UID: \"b24e9c98-6286-4e1c-82ff-7048e879d889\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnz5c" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.040953 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f70cb47b-ccf8-4be2-8e8d-9be364a22205-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-55ztr\" (UID: \"f70cb47b-ccf8-4be2-8e8d-9be364a22205\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-55ztr" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.041403 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f12ed525-d75a-402f-b6c5-ce6298cb98f1-service-ca-bundle\") pod \"router-default-5444994796-wbx6k\" (UID: \"f12ed525-d75a-402f-b6c5-ce6298cb98f1\") " pod="openshift-ingress/router-default-5444994796-wbx6k" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.041761 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-wj4kw"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.041792 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8p9hb"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.041803 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ps9gj"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.042017 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be954a83-6cf4-4f06-9de5-0540e967cfe9-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-b2688\" (UID: \"be954a83-6cf4-4f06-9de5-0540e967cfe9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b2688" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.042428 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f12ed525-d75a-402f-b6c5-ce6298cb98f1-default-certificate\") pod \"router-default-5444994796-wbx6k\" (UID: \"f12ed525-d75a-402f-b6c5-ce6298cb98f1\") " pod="openshift-ingress/router-default-5444994796-wbx6k" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.042541 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0771c5ba-de65-4932-a37e-b21a2337f265-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vvnn7\" (UID: \"0771c5ba-de65-4932-a37e-b21a2337f265\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vvnn7" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.042884 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2401c59-016a-4806-ac04-bbea98467a21-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ft9zs\" (UID: \"e2401c59-016a-4806-ac04-bbea98467a21\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ft9zs" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.043113 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be954a83-6cf4-4f06-9de5-0540e967cfe9-serving-cert\") pod \"controller-manager-879f6c89f-b2688\" (UID: \"be954a83-6cf4-4f06-9de5-0540e967cfe9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b2688" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.043644 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2b705b1-0852-447c-9bed-e23342613e1f-config\") pod \"kube-apiserver-operator-766d6c64bb-vnj9d\" (UID: \"c2b705b1-0852-447c-9bed-e23342613e1f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vnj9d" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.043860 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/336a2b00-9af4-4f90-88a9-7920886c82ca-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6svzb\" (UID: \"336a2b00-9af4-4f90-88a9-7920886c82ca\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6svzb" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.043927 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-b2688"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.044573 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f12ed525-d75a-402f-b6c5-ce6298cb98f1-stats-auth\") pod \"router-default-5444994796-wbx6k\" (UID: \"f12ed525-d75a-402f-b6c5-ce6298cb98f1\") " pod="openshift-ingress/router-default-5444994796-wbx6k" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.045414 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0f6a72e7-f4e0-4628-91ad-c3f81514f9f9-console-config\") pod \"console-f9d7485db-zp55r\" (UID: \"0f6a72e7-f4e0-4628-91ad-c3f81514f9f9\") " pod="openshift-console/console-f9d7485db-zp55r" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.045514 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f6a72e7-f4e0-4628-91ad-c3f81514f9f9-trusted-ca-bundle\") pod \"console-f9d7485db-zp55r\" (UID: \"0f6a72e7-f4e0-4628-91ad-c3f81514f9f9\") " pod="openshift-console/console-f9d7485db-zp55r" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.045923 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3f3a62c-f055-4e72-8b85-47b8d577ca3a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-qcsh9\" (UID: \"a3f3a62c-f055-4e72-8b85-47b8d577ca3a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qcsh9" Dec 13 22:17:32 crc 
kubenswrapper[4866]: I1213 22:17:32.046515 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b24e9c98-6286-4e1c-82ff-7048e879d889-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nnz5c\" (UID: \"b24e9c98-6286-4e1c-82ff-7048e879d889\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnz5c" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.052137 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-brrt8"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.055693 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.057109 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0f6a72e7-f4e0-4628-91ad-c3f81514f9f9-service-ca\") pod \"console-f9d7485db-zp55r\" (UID: \"0f6a72e7-f4e0-4628-91ad-c3f81514f9f9\") " pod="openshift-console/console-f9d7485db-zp55r" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.058923 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2b705b1-0852-447c-9bed-e23342613e1f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vnj9d\" (UID: \"c2b705b1-0852-447c-9bed-e23342613e1f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vnj9d" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.059655 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ft9zs"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.061263 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/336a2b00-9af4-4f90-88a9-7920886c82ca-serving-cert\") pod \"openshift-config-operator-7777fb866f-6svzb\" (UID: \"336a2b00-9af4-4f90-88a9-7920886c82ca\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6svzb" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.062605 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wzf86"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.065564 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hp9fq"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.066030 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f12ed525-d75a-402f-b6c5-ce6298cb98f1-metrics-certs\") pod \"router-default-5444994796-wbx6k\" (UID: \"f12ed525-d75a-402f-b6c5-ce6298cb98f1\") " pod="openshift-ingress/router-default-5444994796-wbx6k" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.066942 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-dmd7t"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.068377 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcftv"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.069561 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-69pzm"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.070679 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnz5c"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.071625 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-jksbt"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.072534 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vt92h"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.073528 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-c64bv"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.074438 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.074567 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-72w7j"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.075551 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-5247b"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.076574 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-h29td"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.077546 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-fw6xl"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.078665 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-zgx58"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.079626 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-57fcq"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.080356 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-57fcq" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.080613 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-g9k2k"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.081849 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r85zh"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.082904 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fjbc"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.083957 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v9nm2"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.085082 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-57fcq"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.086245 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-c5wt4"] Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.094541 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.114334 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.134332 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.154110 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.174204 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.194782 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.212164 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.212171 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.212300 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdd5b" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.214827 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.249301 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjdfz\" (UniqueName: \"kubernetes.io/projected/9b47c24d-6353-449a-b61c-60672ef29fbd-kube-api-access-hjdfz\") pod \"console-operator-58897d9998-s4jkf\" (UID: \"9b47c24d-6353-449a-b61c-60672ef29fbd\") " pod="openshift-console-operator/console-operator-58897d9998-s4jkf" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.267195 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b58c7\" (UniqueName: \"kubernetes.io/projected/144e732e-78b7-4e31-8f30-ed505c2ae0e9-kube-api-access-b58c7\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.279578 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.338313 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.338484 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvvrq\" (UniqueName: \"kubernetes.io/projected/5a10e286-d082-421b-b0c0-a17de2547023-kube-api-access-tvvrq\") pod \"authentication-operator-69f744f599-s54hq\" (UID: \"5a10e286-d082-421b-b0c0-a17de2547023\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s54hq" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.355292 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.375098 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.390518 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-s54hq" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.394701 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.415120 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.435106 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.454832 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.474813 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.479891 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-s4jkf" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.494877 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.514416 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.534577 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.554341 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.574608 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.593999 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.614457 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.633749 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.654173 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.689632 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq28b\" (UniqueName: \"kubernetes.io/projected/73e28583-3224-4f65-a4c6-c1aee16deda8-kube-api-access-kq28b\") pod \"route-controller-manager-6576b87f9c-69pzm\" (UID: \"73e28583-3224-4f65-a4c6-c1aee16deda8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-69pzm" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.706005 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htp5b\" (UniqueName: \"kubernetes.io/projected/b601d73e-8fa8-4ece-8141-6a289f058547-kube-api-access-htp5b\") pod \"machine-approver-56656f9798-l4vcs\" (UID: \"b601d73e-8fa8-4ece-8141-6a289f058547\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l4vcs" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.728724 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvwhb\" (UniqueName: \"kubernetes.io/projected/6b8290a2-1abc-4343-8aa9-27a3f16f64f7-kube-api-access-qvwhb\") pod \"machine-api-operator-5694c8668f-ppg68\" (UID: \"6b8290a2-1abc-4343-8aa9-27a3f16f64f7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ppg68" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.734333 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.754256 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.774945 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 13 22:17:32 crc 
kubenswrapper[4866]: I1213 22:17:32.800411 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.815076 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.834811 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.854403 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.873774 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.874653 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l4vcs" Dec 13 22:17:32 crc kubenswrapper[4866]: W1213 22:17:32.884267 4866 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb601d73e_8fa8_4ece_8141_6a289f058547.slice/crio-2a9f3a6fd20d82c5f6cb1365ee1e7594bcf36c819dff355647bfdc65af38fcf2 WatchSource:0}: Error finding container 2a9f3a6fd20d82c5f6cb1365ee1e7594bcf36c819dff355647bfdc65af38fcf2: Status 404 returned error can't find the container with id 2a9f3a6fd20d82c5f6cb1365ee1e7594bcf36c819dff355647bfdc65af38fcf2 Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.893968 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.916289 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 13 22:17:32 crc kubenswrapper[4866]: E1213 22:17:32.917757 4866 secret.go:188] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition Dec 13 22:17:32 crc kubenswrapper[4866]: E1213 22:17:32.917841 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b8290a2-1abc-4343-8aa9-27a3f16f64f7-machine-api-operator-tls podName:6b8290a2-1abc-4343-8aa9-27a3f16f64f7 nodeName:}" failed. No retries permitted until 2025-12-13 22:17:33.417795758 +0000 UTC m=+51.459134310 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/6b8290a2-1abc-4343-8aa9-27a3f16f64f7-machine-api-operator-tls") pod "machine-api-operator-5694c8668f-ppg68" (UID: "6b8290a2-1abc-4343-8aa9-27a3f16f64f7") : failed to sync secret cache: timed out waiting for the condition Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.937338 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.959804 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.966933 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-69pzm" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.975498 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.993353 4866 request.go:700] Waited for 1.015375536s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/secrets?fieldSelector=metadata.name%3Ddns-operator-dockercfg-9mqw5&limit=500&resourceVersion=0 Dec 13 22:17:32 crc kubenswrapper[4866]: I1213 22:17:32.994921 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.014880 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.035543 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.054346 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.074673 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.093993 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.114124 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.134775 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.154723 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.174749 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.194597 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.212147 
4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.214408 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.234255 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.254345 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.274647 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.294711 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.296979 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.314199 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.335818 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.354671 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.374874 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.395266 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.400092 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fjf67\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.401389 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f70cb47b-ccf8-4be2-8e8d-9be364a22205-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-55ztr\" (UID: \"f70cb47b-ccf8-4be2-8e8d-9be364a22205\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-55ztr" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.415546 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.435659 4866 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.454110 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.466991 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b8290a2-1abc-4343-8aa9-27a3f16f64f7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ppg68\" (UID: \"6b8290a2-1abc-4343-8aa9-27a3f16f64f7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ppg68" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.475744 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.495529 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.514443 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.535245 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.554630 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.578092 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.594881 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.602168 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l4vcs" event={"ID":"b601d73e-8fa8-4ece-8141-6a289f058547","Type":"ContainerStarted","Data":"2a9f3a6fd20d82c5f6cb1365ee1e7594bcf36c819dff355647bfdc65af38fcf2"} Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.614759 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.624043 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.634335 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.640419 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-s4jkf"] Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.654711 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.660042 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-s54hq"] Dec 13 22:17:33 crc kubenswrapper[4866]: W1213 22:17:33.668230 4866 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b47c24d_6353_449a_b61c_60672ef29fbd.slice/crio-c855c98bc13d6734a9eaf7ce81a7866667a9daad4ebebec93f25849676100e50 WatchSource:0}: Error finding container c855c98bc13d6734a9eaf7ce81a7866667a9daad4ebebec93f25849676100e50: Status 404 returned error can't find the container with id c855c98bc13d6734a9eaf7ce81a7866667a9daad4ebebec93f25849676100e50 Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.674474 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.717297 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.768881 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn27t\" (UniqueName: \"kubernetes.io/projected/be954a83-6cf4-4f06-9de5-0540e967cfe9-kube-api-access-nn27t\") pod \"controller-manager-879f6c89f-b2688\" (UID: \"be954a83-6cf4-4f06-9de5-0540e967cfe9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b2688" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.772243 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjsjx\" (UniqueName: \"kubernetes.io/projected/f70cb47b-ccf8-4be2-8e8d-9be364a22205-kube-api-access-jjsjx\") pod \"cluster-image-registry-operator-dc59b4c8b-55ztr\" (UID: \"f70cb47b-ccf8-4be2-8e8d-9be364a22205\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-55ztr" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.777258 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6fe29c2e-47b8-434d-a38f-8edd2992e345-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.777303 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a82c92c0-47dc-4f29-8aa0-304a9f34f728-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-brrt8\" (UID: \"a82c92c0-47dc-4f29-8aa0-304a9f34f728\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-brrt8" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.777326 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6fe29c2e-47b8-434d-a38f-8edd2992e345-trusted-ca\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.777463 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6fe29c2e-47b8-434d-a38f-8edd2992e345-registry-tls\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.777498 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6fe29c2e-47b8-434d-a38f-8edd2992e345-bound-sa-token\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.777518 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dpwh\" (UniqueName: \"kubernetes.io/projected/6fe29c2e-47b8-434d-a38f-8edd2992e345-kube-api-access-5dpwh\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.777544 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv2xv\" (UniqueName: \"kubernetes.io/projected/a82c92c0-47dc-4f29-8aa0-304a9f34f728-kube-api-access-dv2xv\") pod \"marketplace-operator-79b997595-brrt8\" (UID: \"a82c92c0-47dc-4f29-8aa0-304a9f34f728\") " pod="openshift-marketplace/marketplace-operator-79b997595-brrt8" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.777651 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6fe29c2e-47b8-434d-a38f-8edd2992e345-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.777673 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a82c92c0-47dc-4f29-8aa0-304a9f34f728-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-brrt8\" (UID: \"a82c92c0-47dc-4f29-8aa0-304a9f34f728\") " pod="openshift-marketplace/marketplace-operator-79b997595-brrt8" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.777731 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6fe29c2e-47b8-434d-a38f-8edd2992e345-registry-certificates\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.777763 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:33 crc kubenswrapper[4866]: E1213 22:17:33.778091 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:34.278074905 +0000 UTC m=+52.319413457 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.789441 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rhgl\" (UniqueName: \"kubernetes.io/projected/336a2b00-9af4-4f90-88a9-7920886c82ca-kube-api-access-7rhgl\") pod \"openshift-config-operator-7777fb866f-6svzb\" (UID: \"336a2b00-9af4-4f90-88a9-7920886c82ca\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6svzb" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.802522 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-69pzm"] Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.810809 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c89xx\" (UniqueName: \"kubernetes.io/projected/b5141b6f-cd05-4499-ae0d-cdd14f3f5a61-kube-api-access-c89xx\") pod \"downloads-7954f5f757-tlbgv\" (UID: \"b5141b6f-cd05-4499-ae0d-cdd14f3f5a61\") " pod="openshift-console/downloads-7954f5f757-tlbgv" Dec 13 22:17:33 crc kubenswrapper[4866]: W1213 22:17:33.814600 4866 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73e28583_3224_4f65_a4c6_c1aee16deda8.slice/crio-95a1540e964c2fd6941dccf15bbb9eba16c66117052d07b800a2f832e9126ff0 WatchSource:0}: Error finding container 95a1540e964c2fd6941dccf15bbb9eba16c66117052d07b800a2f832e9126ff0: Status 404 returned error can't find the container with id 95a1540e964c2fd6941dccf15bbb9eba16c66117052d07b800a2f832e9126ff0 Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.827556 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c2b705b1-0852-447c-9bed-e23342613e1f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vnj9d\" (UID: \"c2b705b1-0852-447c-9bed-e23342613e1f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vnj9d" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.834725 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 
22:17:33.838956 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-b2688" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.852118 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fjf67"] Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.867452 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv5p6\" (UniqueName: \"kubernetes.io/projected/e2401c59-016a-4806-ac04-bbea98467a21-kube-api-access-mv5p6\") pod \"openshift-controller-manager-operator-756b6f6bc6-ft9zs\" (UID: \"e2401c59-016a-4806-ac04-bbea98467a21\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ft9zs" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.873442 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-tlbgv" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.874698 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.878567 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:33 crc kubenswrapper[4866]: E1213 22:17:33.878724 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:34.378700062 +0000 UTC m=+52.420038624 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.878947 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdcj9\" (UniqueName: \"kubernetes.io/projected/345bf766-2671-46d4-82f0-cb0838949d52-kube-api-access-kdcj9\") pod \"catalog-operator-68c6474976-qcftv\" (UID: \"345bf766-2671-46d4-82f0-cb0838949d52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcftv" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.879065 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a82c92c0-47dc-4f29-8aa0-304a9f34f728-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-brrt8\" (UID: \"a82c92c0-47dc-4f29-8aa0-304a9f34f728\") " pod="openshift-marketplace/marketplace-operator-79b997595-brrt8" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.879153 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/107bca77-52a4-442c-ae09-ee527f46a9c5-cert\") pod \"ingress-canary-57fcq\" (UID: \"107bca77-52a4-442c-ae09-ee527f46a9c5\") " pod="openshift-ingress-canary/ingress-canary-57fcq" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.879260 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6fe29c2e-47b8-434d-a38f-8edd2992e345-trusted-ca\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.879354 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a5fbc614-c8e5-4ba5-b8c9-30ecec1cefb2-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-r85zh\" (UID: \"a5fbc614-c8e5-4ba5-b8c9-30ecec1cefb2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r85zh" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.879429 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/37655499-3f2b-496a-a47c-cbf31b9c19ef-proxy-tls\") pod \"machine-config-operator-74547568cd-h29td\" (UID: \"37655499-3f2b-496a-a47c-cbf31b9c19ef\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h29td" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.879495 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35c963c4-def0-4384-a3bc-856b4cb6ed27-config\") pod \"kube-controller-manager-operator-78b949d7b-ps9gj\" (UID: \"35c963c4-def0-4384-a3bc-856b4cb6ed27\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ps9gj" Dec 13 22:17:33 crc 
kubenswrapper[4866]: I1213 22:17:33.879568 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/538f1baf-acb2-4a08-9baf-2d710def7477-etcd-serving-ca\") pod \"apiserver-76f77b778f-wj4kw\" (UID: \"538f1baf-acb2-4a08-9baf-2d710def7477\") " pod="openshift-apiserver/apiserver-76f77b778f-wj4kw" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.879634 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/94716d23-2314-47b4-8159-f1f2f970c989-socket-dir\") pod \"csi-hostpathplugin-zgx58\" (UID: \"94716d23-2314-47b4-8159-f1f2f970c989\") " pod="hostpath-provisioner/csi-hostpathplugin-zgx58" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.879714 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/33af49a4-4694-4708-9930-d445c917d6a9-srv-cert\") pod \"olm-operator-6b444d44fb-7fjbc\" (UID: \"33af49a4-4694-4708-9930-d445c917d6a9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fjbc" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.879788 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp8kk\" (UniqueName: \"kubernetes.io/projected/e7448d7f-ba89-4749-9cf2-60e55cffd82b-kube-api-access-wp8kk\") pod \"control-plane-machine-set-operator-78cbb6b69f-vt92h\" (UID: \"e7448d7f-ba89-4749-9cf2-60e55cffd82b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vt92h" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.879865 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w47q6\" (UniqueName: \"kubernetes.io/projected/94716d23-2314-47b4-8159-f1f2f970c989-kube-api-access-w47q6\") pod \"csi-hostpathplugin-zgx58\" (UID: \"94716d23-2314-47b4-8159-f1f2f970c989\") " pod="hostpath-provisioner/csi-hostpathplugin-zgx58" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.879934 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj5vk\" (UniqueName: \"kubernetes.io/projected/8eda9f75-a124-4cf4-ad0a-358a111fb147-kube-api-access-kj5vk\") pod \"kube-storage-version-migrator-operator-b67b599dd-v9nm2\" (UID: \"8eda9f75-a124-4cf4-ad0a-358a111fb147\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v9nm2" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.880010 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6fe29c2e-47b8-434d-a38f-8edd2992e345-registry-tls\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.880162 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6fe29c2e-47b8-434d-a38f-8edd2992e345-bound-sa-token\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.880243 4866 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5dpwh\" (UniqueName: \"kubernetes.io/projected/6fe29c2e-47b8-434d-a38f-8edd2992e345-kube-api-access-5dpwh\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.880314 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2gck\" (UniqueName: \"kubernetes.io/projected/e110a18a-ce3e-4c58-b80d-9f8c9018a868-kube-api-access-z2gck\") pod \"multus-admission-controller-857f4d67dd-5247b\" (UID: \"e110a18a-ce3e-4c58-b80d-9f8c9018a868\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5247b" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.880391 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/94716d23-2314-47b4-8159-f1f2f970c989-csi-data-dir\") pod \"csi-hostpathplugin-zgx58\" (UID: \"94716d23-2314-47b4-8159-f1f2f970c989\") " pod="hostpath-provisioner/csi-hostpathplugin-zgx58" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.880257 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6fe29c2e-47b8-434d-a38f-8edd2992e345-trusted-ca\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.880547 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/eb8f2404-0acf-4a0a-a581-b5c767351742-ready\") pod \"cni-sysctl-allowlist-ds-kg5zb\" (UID: \"eb8f2404-0acf-4a0a-a581-b5c767351742\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kg5zb" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.880630 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/538f1baf-acb2-4a08-9baf-2d710def7477-serving-cert\") pod \"apiserver-76f77b778f-wj4kw\" (UID: \"538f1baf-acb2-4a08-9baf-2d710def7477\") " pod="openshift-apiserver/apiserver-76f77b778f-wj4kw" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.880704 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmjj4\" (UniqueName: \"kubernetes.io/projected/9055b773-45d4-411e-a3cf-f160e940b102-kube-api-access-hmjj4\") pod \"collect-profiles-29427735-h22d8\" (UID: \"9055b773-45d4-411e-a3cf-f160e940b102\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29427735-h22d8" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.880785 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zzlc\" (UniqueName: \"kubernetes.io/projected/ff0c44c6-31b2-46c3-a5b7-51c10ed59236-kube-api-access-7zzlc\") pod \"packageserver-d55dfcdfc-8p9hb\" (UID: \"ff0c44c6-31b2-46c3-a5b7-51c10ed59236\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8p9hb" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.880859 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fczfq\" (UniqueName: 
\"kubernetes.io/projected/33af49a4-4694-4708-9930-d445c917d6a9-kube-api-access-fczfq\") pod \"olm-operator-6b444d44fb-7fjbc\" (UID: \"33af49a4-4694-4708-9930-d445c917d6a9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fjbc" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.880934 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/06c68d45-b14f-447c-a680-c68502125d31-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-jksbt\" (UID: \"06c68d45-b14f-447c-a680-c68502125d31\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jksbt" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.881002 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbqxt\" (UniqueName: \"kubernetes.io/projected/538f1baf-acb2-4a08-9baf-2d710def7477-kube-api-access-rbqxt\") pod \"apiserver-76f77b778f-wj4kw\" (UID: \"538f1baf-acb2-4a08-9baf-2d710def7477\") " pod="openshift-apiserver/apiserver-76f77b778f-wj4kw" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.881115 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn6bt\" (UniqueName: \"kubernetes.io/projected/a5fbc614-c8e5-4ba5-b8c9-30ecec1cefb2-kube-api-access-sn6bt\") pod \"package-server-manager-789f6589d5-r85zh\" (UID: \"a5fbc614-c8e5-4ba5-b8c9-30ecec1cefb2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r85zh" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.881201 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5a9f2a1-3ea0-4386-b3d8-483570ff066d-serving-cert\") pod \"service-ca-operator-777779d784-c64bv\" (UID: \"c5a9f2a1-3ea0-4386-b3d8-483570ff066d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-c64bv" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.881311 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/345bf766-2671-46d4-82f0-cb0838949d52-srv-cert\") pod \"catalog-operator-68c6474976-qcftv\" (UID: \"345bf766-2671-46d4-82f0-cb0838949d52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcftv" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.880944 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a82c92c0-47dc-4f29-8aa0-304a9f34f728-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-brrt8\" (UID: \"a82c92c0-47dc-4f29-8aa0-304a9f34f728\") " pod="openshift-marketplace/marketplace-operator-79b997595-brrt8" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.881395 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/538f1baf-acb2-4a08-9baf-2d710def7477-image-import-ca\") pod \"apiserver-76f77b778f-wj4kw\" (UID: \"538f1baf-acb2-4a08-9baf-2d710def7477\") " pod="openshift-apiserver/apiserver-76f77b778f-wj4kw" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.881488 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/538f1baf-acb2-4a08-9baf-2d710def7477-config\") pod \"apiserver-76f77b778f-wj4kw\" (UID: \"538f1baf-acb2-4a08-9baf-2d710def7477\") " pod="openshift-apiserver/apiserver-76f77b778f-wj4kw" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.881573 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8eda9f75-a124-4cf4-ad0a-358a111fb147-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-v9nm2\" (UID: \"8eda9f75-a124-4cf4-ad0a-358a111fb147\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v9nm2" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.881676 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.881764 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/06c68d45-b14f-447c-a680-c68502125d31-proxy-tls\") pod \"machine-config-controller-84d6567774-jksbt\" (UID: \"06c68d45-b14f-447c-a680-c68502125d31\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jksbt" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.881892 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/37655499-3f2b-496a-a47c-cbf31b9c19ef-images\") pod \"machine-config-operator-74547568cd-h29td\" (UID: \"37655499-3f2b-496a-a47c-cbf31b9c19ef\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h29td" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.881949 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tcm8\" (UniqueName: \"kubernetes.io/projected/c65b13fe-f958-4b3e-8c79-c7f4028abaa7-kube-api-access-5tcm8\") pod \"machine-config-server-dc7kh\" (UID: \"c65b13fe-f958-4b3e-8c79-c7f4028abaa7\") " pod="openshift-machine-config-operator/machine-config-server-dc7kh" Dec 13 22:17:33 crc kubenswrapper[4866]: E1213 22:17:33.881960 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:34.381946229 +0000 UTC m=+52.423284781 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.881990 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/538f1baf-acb2-4a08-9baf-2d710def7477-encryption-config\") pod \"apiserver-76f77b778f-wj4kw\" (UID: \"538f1baf-acb2-4a08-9baf-2d710def7477\") " pod="openshift-apiserver/apiserver-76f77b778f-wj4kw"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.882017 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6aebd4d-edf5-4874-8eb1-e1e7504fcd73-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hp9fq\" (UID: \"a6aebd4d-edf5-4874-8eb1-e1e7504fcd73\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hp9fq"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.882066 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnv6x\" (UniqueName: \"kubernetes.io/projected/d8283a63-3511-4508-ae32-3e5ec3488c19-kube-api-access-lnv6x\") pod \"dns-default-c5wt4\" (UID: \"d8283a63-3511-4508-ae32-3e5ec3488c19\") " pod="openshift-dns/dns-default-c5wt4"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.882083 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpdl9\" (UniqueName: \"kubernetes.io/projected/f607d834-3c1c-45e8-8208-19ed5aca7e95-kube-api-access-hpdl9\") pod \"ingress-operator-5b745b69d9-fw6xl\" (UID: \"f607d834-3c1c-45e8-8208-19ed5aca7e95\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fw6xl"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.882100 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35c963c4-def0-4384-a3bc-856b4cb6ed27-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ps9gj\" (UID: \"35c963c4-def0-4384-a3bc-856b4cb6ed27\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ps9gj"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.882124 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6fe29c2e-47b8-434d-a38f-8edd2992e345-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.882153 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d8283a63-3511-4508-ae32-3e5ec3488c19-metrics-tls\") pod \"dns-default-c5wt4\" (UID: \"d8283a63-3511-4508-ae32-3e5ec3488c19\") " pod="openshift-dns/dns-default-c5wt4"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.882174 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f607d834-3c1c-45e8-8208-19ed5aca7e95-metrics-tls\") pod \"ingress-operator-5b745b69d9-fw6xl\" (UID: \"f607d834-3c1c-45e8-8208-19ed5aca7e95\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fw6xl"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.882189 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f607d834-3c1c-45e8-8208-19ed5aca7e95-bound-sa-token\") pod \"ingress-operator-5b745b69d9-fw6xl\" (UID: \"f607d834-3c1c-45e8-8208-19ed5aca7e95\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fw6xl"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.882225 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae751643-7d61-4b5a-9cc6-d3507a958aee-serving-cert\") pod \"etcd-operator-b45778765-dmd7t\" (UID: \"ae751643-7d61-4b5a-9cc6-d3507a958aee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dmd7t"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.882241 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/94716d23-2314-47b4-8159-f1f2f970c989-registration-dir\") pod \"csi-hostpathplugin-zgx58\" (UID: \"94716d23-2314-47b4-8159-f1f2f970c989\") " pod="hostpath-provisioner/csi-hostpathplugin-zgx58"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.882288 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/538f1baf-acb2-4a08-9baf-2d710def7477-node-pullsecrets\") pod \"apiserver-76f77b778f-wj4kw\" (UID: \"538f1baf-acb2-4a08-9baf-2d710def7477\") " pod="openshift-apiserver/apiserver-76f77b778f-wj4kw"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.882326 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f607d834-3c1c-45e8-8208-19ed5aca7e95-trusted-ca\") pod \"ingress-operator-5b745b69d9-fw6xl\" (UID: \"f607d834-3c1c-45e8-8208-19ed5aca7e95\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fw6xl"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.882374 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/37655499-3f2b-496a-a47c-cbf31b9c19ef-auth-proxy-config\") pod \"machine-config-operator-74547568cd-h29td\" (UID: \"37655499-3f2b-496a-a47c-cbf31b9c19ef\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h29td"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.882417 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/33af49a4-4694-4708-9930-d445c917d6a9-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7fjbc\" (UID: \"33af49a4-4694-4708-9930-d445c917d6a9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fjbc"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.882505 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/eb8f2404-0acf-4a0a-a581-b5c767351742-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-kg5zb\" (UID: \"eb8f2404-0acf-4a0a-a581-b5c767351742\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kg5zb"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.882705 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnswr\" (UniqueName: \"kubernetes.io/projected/87bf1879-724a-4a9e-b4c8-d5de32457782-kube-api-access-gnswr\") pod \"migrator-59844c95c7-9nvcf\" (UID: \"87bf1879-724a-4a9e-b4c8-d5de32457782\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9nvcf"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.882763 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv2xv\" (UniqueName: \"kubernetes.io/projected/a82c92c0-47dc-4f29-8aa0-304a9f34f728-kube-api-access-dv2xv\") pod \"marketplace-operator-79b997595-brrt8\" (UID: \"a82c92c0-47dc-4f29-8aa0-304a9f34f728\") " pod="openshift-marketplace/marketplace-operator-79b997595-brrt8"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.882920 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58d4z\" (UniqueName: \"kubernetes.io/projected/eb8f2404-0acf-4a0a-a581-b5c767351742-kube-api-access-58d4z\") pod \"cni-sysctl-allowlist-ds-kg5zb\" (UID: \"eb8f2404-0acf-4a0a-a581-b5c767351742\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kg5zb"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.882968 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c65b13fe-f958-4b3e-8c79-c7f4028abaa7-certs\") pod \"machine-config-server-dc7kh\" (UID: \"c65b13fe-f958-4b3e-8c79-c7f4028abaa7\") " pod="openshift-machine-config-operator/machine-config-server-dc7kh"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.883002 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9055b773-45d4-411e-a3cf-f160e940b102-config-volume\") pod \"collect-profiles-29427735-h22d8\" (UID: \"9055b773-45d4-411e-a3cf-f160e940b102\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29427735-h22d8"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.883025 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e110a18a-ce3e-4c58-b80d-9f8c9018a868-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-5247b\" (UID: \"e110a18a-ce3e-4c58-b80d-9f8c9018a868\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5247b"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.883071 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ae751643-7d61-4b5a-9cc6-d3507a958aee-etcd-client\") pod \"etcd-operator-b45778765-dmd7t\" (UID: \"ae751643-7d61-4b5a-9cc6-d3507a958aee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dmd7t"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.883116 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8eda9f75-a124-4cf4-ad0a-358a111fb147-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-v9nm2\" (UID: \"8eda9f75-a124-4cf4-ad0a-358a111fb147\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v9nm2"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.883170 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/eb8f2404-0acf-4a0a-a581-b5c767351742-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-kg5zb\" (UID: \"eb8f2404-0acf-4a0a-a581-b5c767351742\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kg5zb"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.883231 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6whg9\" (UniqueName: \"kubernetes.io/projected/474a926a-06bd-4d6e-bd60-5cfd5a1fa37b-kube-api-access-6whg9\") pod \"dns-operator-744455d44c-72w7j\" (UID: \"474a926a-06bd-4d6e-bd60-5cfd5a1fa37b\") " pod="openshift-dns-operator/dns-operator-744455d44c-72w7j"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.883353 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/345bf766-2671-46d4-82f0-cb0838949d52-profile-collector-cert\") pod \"catalog-operator-68c6474976-qcftv\" (UID: \"345bf766-2671-46d4-82f0-cb0838949d52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcftv"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.883380 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ff0c44c6-31b2-46c3-a5b7-51c10ed59236-webhook-cert\") pod \"packageserver-d55dfcdfc-8p9hb\" (UID: \"ff0c44c6-31b2-46c3-a5b7-51c10ed59236\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8p9hb"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.883405 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/538f1baf-acb2-4a08-9baf-2d710def7477-etcd-client\") pod \"apiserver-76f77b778f-wj4kw\" (UID: \"538f1baf-acb2-4a08-9baf-2d710def7477\") " pod="openshift-apiserver/apiserver-76f77b778f-wj4kw"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.883420 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae751643-7d61-4b5a-9cc6-d3507a958aee-config\") pod \"etcd-operator-b45778765-dmd7t\" (UID: \"ae751643-7d61-4b5a-9cc6-d3507a958aee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dmd7t"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.883438 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/474a926a-06bd-4d6e-bd60-5cfd5a1fa37b-metrics-tls\") pod \"dns-operator-744455d44c-72w7j\" (UID: \"474a926a-06bd-4d6e-bd60-5cfd5a1fa37b\") " pod="openshift-dns-operator/dns-operator-744455d44c-72w7j"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.883525 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/92fa87fc-9ff3-4add-b542-4a936c6d8d7a-signing-cabundle\") pod \"service-ca-9c57cc56f-g9k2k\" (UID: \"92fa87fc-9ff3-4add-b542-4a936c6d8d7a\") " pod="openshift-service-ca/service-ca-9c57cc56f-g9k2k"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.883621 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a82c92c0-47dc-4f29-8aa0-304a9f34f728-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-brrt8\" (UID: \"a82c92c0-47dc-4f29-8aa0-304a9f34f728\") " pod="openshift-marketplace/marketplace-operator-79b997595-brrt8"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.883663 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6fe29c2e-47b8-434d-a38f-8edd2992e345-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.883690 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/538f1baf-acb2-4a08-9baf-2d710def7477-trusted-ca-bundle\") pod \"apiserver-76f77b778f-wj4kw\" (UID: \"538f1baf-acb2-4a08-9baf-2d710def7477\") " pod="openshift-apiserver/apiserver-76f77b778f-wj4kw"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.883713 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/94716d23-2314-47b4-8159-f1f2f970c989-mountpoint-dir\") pod \"csi-hostpathplugin-zgx58\" (UID: \"94716d23-2314-47b4-8159-f1f2f970c989\") " pod="hostpath-provisioner/csi-hostpathplugin-zgx58"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.883736 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrfld\" (UniqueName: \"kubernetes.io/projected/c5a9f2a1-3ea0-4386-b3d8-483570ff066d-kube-api-access-jrfld\") pod \"service-ca-operator-777779d784-c64bv\" (UID: \"c5a9f2a1-3ea0-4386-b3d8-483570ff066d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-c64bv"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.883776 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ff0c44c6-31b2-46c3-a5b7-51c10ed59236-apiservice-cert\") pod \"packageserver-d55dfcdfc-8p9hb\" (UID: \"ff0c44c6-31b2-46c3-a5b7-51c10ed59236\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8p9hb"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.883799 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ae751643-7d61-4b5a-9cc6-d3507a958aee-etcd-ca\") pod \"etcd-operator-b45778765-dmd7t\" (UID: \"ae751643-7d61-4b5a-9cc6-d3507a958aee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dmd7t"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.883836 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8283a63-3511-4508-ae32-3e5ec3488c19-config-volume\") pod \"dns-default-c5wt4\" (UID: \"d8283a63-3511-4508-ae32-3e5ec3488c19\") " pod="openshift-dns/dns-default-c5wt4"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.883862 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9055b773-45d4-411e-a3cf-f160e940b102-secret-volume\") pod \"collect-profiles-29427735-h22d8\" (UID: \"9055b773-45d4-411e-a3cf-f160e940b102\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29427735-h22d8"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.883893 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6fe29c2e-47b8-434d-a38f-8edd2992e345-registry-certificates\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.883916 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/92fa87fc-9ff3-4add-b542-4a936c6d8d7a-signing-key\") pod \"service-ca-9c57cc56f-g9k2k\" (UID: \"92fa87fc-9ff3-4add-b542-4a936c6d8d7a\") " pod="openshift-service-ca/service-ca-9c57cc56f-g9k2k"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.883943 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6aebd4d-edf5-4874-8eb1-e1e7504fcd73-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hp9fq\" (UID: \"a6aebd4d-edf5-4874-8eb1-e1e7504fcd73\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hp9fq"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.883964 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ff0c44c6-31b2-46c3-a5b7-51c10ed59236-tmpfs\") pod \"packageserver-d55dfcdfc-8p9hb\" (UID: \"ff0c44c6-31b2-46c3-a5b7-51c10ed59236\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8p9hb"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.883993 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8kwb\" (UniqueName: \"kubernetes.io/projected/92fa87fc-9ff3-4add-b542-4a936c6d8d7a-kube-api-access-b8kwb\") pod \"service-ca-9c57cc56f-g9k2k\" (UID: \"92fa87fc-9ff3-4add-b542-4a936c6d8d7a\") " pod="openshift-service-ca/service-ca-9c57cc56f-g9k2k"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.884013 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/538f1baf-acb2-4a08-9baf-2d710def7477-audit\") pod \"apiserver-76f77b778f-wj4kw\" (UID: \"538f1baf-acb2-4a08-9baf-2d710def7477\") " pod="openshift-apiserver/apiserver-76f77b778f-wj4kw"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.884037 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28vj7\" (UniqueName: \"kubernetes.io/projected/ae751643-7d61-4b5a-9cc6-d3507a958aee-kube-api-access-28vj7\") pod \"etcd-operator-b45778765-dmd7t\" (UID: \"ae751643-7d61-4b5a-9cc6-d3507a958aee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dmd7t"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.884085 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c65b13fe-f958-4b3e-8c79-c7f4028abaa7-node-bootstrap-token\") pod \"machine-config-server-dc7kh\" (UID: \"c65b13fe-f958-4b3e-8c79-c7f4028abaa7\") " pod="openshift-machine-config-operator/machine-config-server-dc7kh"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.884108 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5a9f2a1-3ea0-4386-b3d8-483570ff066d-config\") pod \"service-ca-operator-777779d784-c64bv\" (UID: \"c5a9f2a1-3ea0-4386-b3d8-483570ff066d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-c64bv"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.884125 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35c963c4-def0-4384-a3bc-856b4cb6ed27-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ps9gj\" (UID: \"35c963c4-def0-4384-a3bc-856b4cb6ed27\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ps9gj"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.884179 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6aebd4d-edf5-4874-8eb1-e1e7504fcd73-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hp9fq\" (UID: \"a6aebd4d-edf5-4874-8eb1-e1e7504fcd73\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hp9fq"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.884213 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m477s\" (UniqueName: \"kubernetes.io/projected/06c68d45-b14f-447c-a680-c68502125d31-kube-api-access-m477s\") pod \"machine-config-controller-84d6567774-jksbt\" (UID: \"06c68d45-b14f-447c-a680-c68502125d31\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jksbt"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.884241 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ae751643-7d61-4b5a-9cc6-d3507a958aee-etcd-service-ca\") pod \"etcd-operator-b45778765-dmd7t\" (UID: \"ae751643-7d61-4b5a-9cc6-d3507a958aee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dmd7t"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.884259 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tghjh\" (UniqueName: \"kubernetes.io/projected/107bca77-52a4-442c-ae09-ee527f46a9c5-kube-api-access-tghjh\") pod \"ingress-canary-57fcq\" (UID: \"107bca77-52a4-442c-ae09-ee527f46a9c5\") " pod="openshift-ingress-canary/ingress-canary-57fcq"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.884315 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/94716d23-2314-47b4-8159-f1f2f970c989-plugins-dir\") pod \"csi-hostpathplugin-zgx58\" (UID: \"94716d23-2314-47b4-8159-f1f2f970c989\") " pod="hostpath-provisioner/csi-hostpathplugin-zgx58"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.884338 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df5qs\" (UniqueName: \"kubernetes.io/projected/37655499-3f2b-496a-a47c-cbf31b9c19ef-kube-api-access-df5qs\") pod \"machine-config-operator-74547568cd-h29td\" (UID: \"37655499-3f2b-496a-a47c-cbf31b9c19ef\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h29td"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.884357 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/538f1baf-acb2-4a08-9baf-2d710def7477-audit-dir\") pod \"apiserver-76f77b778f-wj4kw\" (UID: \"538f1baf-acb2-4a08-9baf-2d710def7477\") " pod="openshift-apiserver/apiserver-76f77b778f-wj4kw"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.884385 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7448d7f-ba89-4749-9cf2-60e55cffd82b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-vt92h\" (UID: \"e7448d7f-ba89-4749-9cf2-60e55cffd82b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vt92h"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.884985 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6fe29c2e-47b8-434d-a38f-8edd2992e345-registry-tls\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.885030 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6fe29c2e-47b8-434d-a38f-8edd2992e345-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.885714 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6fe29c2e-47b8-434d-a38f-8edd2992e345-registry-certificates\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.886368 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a82c92c0-47dc-4f29-8aa0-304a9f34f728-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-brrt8\" (UID: \"a82c92c0-47dc-4f29-8aa0-304a9f34f728\") " pod="openshift-marketplace/marketplace-operator-79b997595-brrt8"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.887966 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6fe29c2e-47b8-434d-a38f-8edd2992e345-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.891124 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6svzb"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.894071 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.915145 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.921591 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ft9zs"
Dec 13 22:17:33 crc kubenswrapper[4866]: W1213 22:17:33.931884 4866 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod144e732e_78b7_4e31_8f30_ed505c2ae0e9.slice/crio-12f894700dc91092ec58aa9d468b28c3a00de3e72557c36c97114a2d58646bf9 WatchSource:0}: Error finding container 12f894700dc91092ec58aa9d468b28c3a00de3e72557c36c97114a2d58646bf9: Status 404 returned error can't find the container with id 12f894700dc91092ec58aa9d468b28c3a00de3e72557c36c97114a2d58646bf9
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.941246 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vnj9d"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.953504 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f70cb47b-ccf8-4be2-8e8d-9be364a22205-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-55ztr\" (UID: \"f70cb47b-ccf8-4be2-8e8d-9be364a22205\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-55ztr"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.957412 4866 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.974870 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.985115 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.985339 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m477s\" (UniqueName: \"kubernetes.io/projected/06c68d45-b14f-447c-a680-c68502125d31-kube-api-access-m477s\") pod \"machine-config-controller-84d6567774-jksbt\" (UID: \"06c68d45-b14f-447c-a680-c68502125d31\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jksbt"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.985379 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ae751643-7d61-4b5a-9cc6-d3507a958aee-etcd-service-ca\") pod \"etcd-operator-b45778765-dmd7t\" (UID: \"ae751643-7d61-4b5a-9cc6-d3507a958aee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dmd7t"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.985409 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tghjh\" (UniqueName: \"kubernetes.io/projected/107bca77-52a4-442c-ae09-ee527f46a9c5-kube-api-access-tghjh\") pod \"ingress-canary-57fcq\" (UID: \"107bca77-52a4-442c-ae09-ee527f46a9c5\") " pod="openshift-ingress-canary/ingress-canary-57fcq"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.985452 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df5qs\" (UniqueName: \"kubernetes.io/projected/37655499-3f2b-496a-a47c-cbf31b9c19ef-kube-api-access-df5qs\") pod \"machine-config-operator-74547568cd-h29td\" (UID: \"37655499-3f2b-496a-a47c-cbf31b9c19ef\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h29td"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.985475 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/538f1baf-acb2-4a08-9baf-2d710def7477-audit-dir\") pod \"apiserver-76f77b778f-wj4kw\" (UID: \"538f1baf-acb2-4a08-9baf-2d710def7477\") " pod="openshift-apiserver/apiserver-76f77b778f-wj4kw"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.985530 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/538f1baf-acb2-4a08-9baf-2d710def7477-audit-dir\") pod \"apiserver-76f77b778f-wj4kw\" (UID: \"538f1baf-acb2-4a08-9baf-2d710def7477\") " pod="openshift-apiserver/apiserver-76f77b778f-wj4kw"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.986328 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ae751643-7d61-4b5a-9cc6-d3507a958aee-etcd-service-ca\") pod \"etcd-operator-b45778765-dmd7t\" (UID: \"ae751643-7d61-4b5a-9cc6-d3507a958aee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dmd7t"
Dec 13 22:17:33 crc kubenswrapper[4866]: E1213 22:17:33.986471 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:34.485476224 +0000 UTC m=+52.526814786 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.986547 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/94716d23-2314-47b4-8159-f1f2f970c989-plugins-dir\") pod \"csi-hostpathplugin-zgx58\" (UID: \"94716d23-2314-47b4-8159-f1f2f970c989\") " pod="hostpath-provisioner/csi-hostpathplugin-zgx58"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.986599 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7448d7f-ba89-4749-9cf2-60e55cffd82b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-vt92h\" (UID: \"e7448d7f-ba89-4749-9cf2-60e55cffd82b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vt92h"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.986712 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/107bca77-52a4-442c-ae09-ee527f46a9c5-cert\") pod \"ingress-canary-57fcq\" (UID: \"107bca77-52a4-442c-ae09-ee527f46a9c5\") " pod="openshift-ingress-canary/ingress-canary-57fcq"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.986739 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdcj9\" (UniqueName: \"kubernetes.io/projected/345bf766-2671-46d4-82f0-cb0838949d52-kube-api-access-kdcj9\") pod \"catalog-operator-68c6474976-qcftv\" (UID: \"345bf766-2671-46d4-82f0-cb0838949d52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcftv"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.986792 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a5fbc614-c8e5-4ba5-b8c9-30ecec1cefb2-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-r85zh\" (UID: \"a5fbc614-c8e5-4ba5-b8c9-30ecec1cefb2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r85zh"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.986821 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/37655499-3f2b-496a-a47c-cbf31b9c19ef-proxy-tls\") pod \"machine-config-operator-74547568cd-h29td\" (UID: \"37655499-3f2b-496a-a47c-cbf31b9c19ef\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h29td"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.986846 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35c963c4-def0-4384-a3bc-856b4cb6ed27-config\") pod \"kube-controller-manager-operator-78b949d7b-ps9gj\" (UID: \"35c963c4-def0-4384-a3bc-856b4cb6ed27\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ps9gj"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.986904 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/538f1baf-acb2-4a08-9baf-2d710def7477-etcd-serving-ca\") pod \"apiserver-76f77b778f-wj4kw\" (UID: \"538f1baf-acb2-4a08-9baf-2d710def7477\") " pod="openshift-apiserver/apiserver-76f77b778f-wj4kw"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.986928 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/94716d23-2314-47b4-8159-f1f2f970c989-socket-dir\") pod \"csi-hostpathplugin-zgx58\" (UID: \"94716d23-2314-47b4-8159-f1f2f970c989\") " pod="hostpath-provisioner/csi-hostpathplugin-zgx58"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.986977 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/33af49a4-4694-4708-9930-d445c917d6a9-srv-cert\") pod \"olm-operator-6b444d44fb-7fjbc\" (UID: \"33af49a4-4694-4708-9930-d445c917d6a9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fjbc"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.987005 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp8kk\" (UniqueName: \"kubernetes.io/projected/e7448d7f-ba89-4749-9cf2-60e55cffd82b-kube-api-access-wp8kk\") pod \"control-plane-machine-set-operator-78cbb6b69f-vt92h\" (UID: \"e7448d7f-ba89-4749-9cf2-60e55cffd82b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vt92h"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.987153 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj5vk\" (UniqueName: \"kubernetes.io/projected/8eda9f75-a124-4cf4-ad0a-358a111fb147-kube-api-access-kj5vk\") pod \"kube-storage-version-migrator-operator-b67b599dd-v9nm2\" (UID: \"8eda9f75-a124-4cf4-ad0a-358a111fb147\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v9nm2"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.987212 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w47q6\" (UniqueName: \"kubernetes.io/projected/94716d23-2314-47b4-8159-f1f2f970c989-kube-api-access-w47q6\") pod \"csi-hostpathplugin-zgx58\" (UID: \"94716d23-2314-47b4-8159-f1f2f970c989\") " pod="hostpath-provisioner/csi-hostpathplugin-zgx58"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.987251 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2gck\" (UniqueName: \"kubernetes.io/projected/e110a18a-ce3e-4c58-b80d-9f8c9018a868-kube-api-access-z2gck\") pod \"multus-admission-controller-857f4d67dd-5247b\" (UID: \"e110a18a-ce3e-4c58-b80d-9f8c9018a868\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5247b"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.987299 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/94716d23-2314-47b4-8159-f1f2f970c989-csi-data-dir\") pod \"csi-hostpathplugin-zgx58\" (UID: \"94716d23-2314-47b4-8159-f1f2f970c989\") " pod="hostpath-provisioner/csi-hostpathplugin-zgx58"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.987326 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/eb8f2404-0acf-4a0a-a581-b5c767351742-ready\") pod \"cni-sysctl-allowlist-ds-kg5zb\" (UID: \"eb8f2404-0acf-4a0a-a581-b5c767351742\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kg5zb"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.987371 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/538f1baf-acb2-4a08-9baf-2d710def7477-serving-cert\") pod \"apiserver-76f77b778f-wj4kw\" (UID: \"538f1baf-acb2-4a08-9baf-2d710def7477\") " pod="openshift-apiserver/apiserver-76f77b778f-wj4kw"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.987396 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmjj4\" (UniqueName: \"kubernetes.io/projected/9055b773-45d4-411e-a3cf-f160e940b102-kube-api-access-hmjj4\") pod \"collect-profiles-29427735-h22d8\" (UID: \"9055b773-45d4-411e-a3cf-f160e940b102\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29427735-h22d8"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.987443 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zzlc\" (UniqueName: \"kubernetes.io/projected/ff0c44c6-31b2-46c3-a5b7-51c10ed59236-kube-api-access-7zzlc\") pod \"packageserver-d55dfcdfc-8p9hb\" (UID: \"ff0c44c6-31b2-46c3-a5b7-51c10ed59236\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8p9hb"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.987470 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/06c68d45-b14f-447c-a680-c68502125d31-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-jksbt\" (UID: \"06c68d45-b14f-447c-a680-c68502125d31\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jksbt"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.987495 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbqxt\" (UniqueName: \"kubernetes.io/projected/538f1baf-acb2-4a08-9baf-2d710def7477-kube-api-access-rbqxt\") pod \"apiserver-76f77b778f-wj4kw\" (UID: \"538f1baf-acb2-4a08-9baf-2d710def7477\") " pod="openshift-apiserver/apiserver-76f77b778f-wj4kw"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.987539 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fczfq\" (UniqueName: \"kubernetes.io/projected/33af49a4-4694-4708-9930-d445c917d6a9-kube-api-access-fczfq\") pod \"olm-operator-6b444d44fb-7fjbc\" (UID: \"33af49a4-4694-4708-9930-d445c917d6a9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fjbc"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.987563 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5a9f2a1-3ea0-4386-b3d8-483570ff066d-serving-cert\") pod \"service-ca-operator-777779d784-c64bv\" (UID: \"c5a9f2a1-3ea0-4386-b3d8-483570ff066d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-c64bv"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.987605 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/345bf766-2671-46d4-82f0-cb0838949d52-srv-cert\") pod \"catalog-operator-68c6474976-qcftv\" (UID: \"345bf766-2671-46d4-82f0-cb0838949d52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcftv"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.987628 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn6bt\" (UniqueName: \"kubernetes.io/projected/a5fbc614-c8e5-4ba5-b8c9-30ecec1cefb2-kube-api-access-sn6bt\") pod \"package-server-manager-789f6589d5-r85zh\" (UID: \"a5fbc614-c8e5-4ba5-b8c9-30ecec1cefb2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r85zh"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.987648 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/538f1baf-acb2-4a08-9baf-2d710def7477-image-import-ca\") pod \"apiserver-76f77b778f-wj4kw\" (UID: \"538f1baf-acb2-4a08-9baf-2d710def7477\") " pod="openshift-apiserver/apiserver-76f77b778f-wj4kw"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.987690 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/538f1baf-acb2-4a08-9baf-2d710def7477-config\") pod \"apiserver-76f77b778f-wj4kw\" (UID: \"538f1baf-acb2-4a08-9baf-2d710def7477\") " pod="openshift-apiserver/apiserver-76f77b778f-wj4kw"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.987714 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8eda9f75-a124-4cf4-ad0a-358a111fb147-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-v9nm2\" (UID: \"8eda9f75-a124-4cf4-ad0a-358a111fb147\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v9nm2"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.988880 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/eb8f2404-0acf-4a0a-a581-b5c767351742-ready\") pod \"cni-sysctl-allowlist-ds-kg5zb\" (UID: \"eb8f2404-0acf-4a0a-a581-b5c767351742\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kg5zb"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.989005 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/94716d23-2314-47b4-8159-f1f2f970c989-plugins-dir\") pod \"csi-hostpathplugin-zgx58\" (UID: \"94716d23-2314-47b4-8159-f1f2f970c989\") " pod="hostpath-provisioner/csi-hostpathplugin-zgx58"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.989023 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/94716d23-2314-47b4-8159-f1f2f970c989-csi-data-dir\") pod \"csi-hostpathplugin-zgx58\" (UID: \"94716d23-2314-47b4-8159-f1f2f970c989\") " pod="hostpath-provisioner/csi-hostpathplugin-zgx58"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.989673 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35c963c4-def0-4384-a3bc-856b4cb6ed27-config\") pod \"kube-controller-manager-operator-78b949d7b-ps9gj\" (UID: \"35c963c4-def0-4384-a3bc-856b4cb6ed27\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ps9gj"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.990427 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.990612 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/06c68d45-b14f-447c-a680-c68502125d31-proxy-tls\") pod \"machine-config-controller-84d6567774-jksbt\" (UID: \"06c68d45-b14f-447c-a680-c68502125d31\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jksbt"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.990747 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/37655499-3f2b-496a-a47c-cbf31b9c19ef-images\") pod \"machine-config-operator-74547568cd-h29td\" (UID: \"37655499-3f2b-496a-a47c-cbf31b9c19ef\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h29td"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.990889 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tcm8\" (UniqueName: \"kubernetes.io/projected/c65b13fe-f958-4b3e-8c79-c7f4028abaa7-kube-api-access-5tcm8\") pod \"machine-config-server-dc7kh\" (UID: \"c65b13fe-f958-4b3e-8c79-c7f4028abaa7\") " pod="openshift-machine-config-operator/machine-config-server-dc7kh"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.990558 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/538f1baf-acb2-4a08-9baf-2d710def7477-etcd-serving-ca\") pod \"apiserver-76f77b778f-wj4kw\" (UID: \"538f1baf-acb2-4a08-9baf-2d710def7477\") " pod="openshift-apiserver/apiserver-76f77b778f-wj4kw"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.991569 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/538f1baf-acb2-4a08-9baf-2d710def7477-config\") pod \"apiserver-76f77b778f-wj4kw\" (UID: \"538f1baf-acb2-4a08-9baf-2d710def7477\") " pod="openshift-apiserver/apiserver-76f77b778f-wj4kw"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.992185 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8eda9f75-a124-4cf4-ad0a-358a111fb147-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-v9nm2\" (UID: \"8eda9f75-a124-4cf4-ad0a-358a111fb147\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v9nm2"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.992348 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/06c68d45-b14f-447c-a680-c68502125d31-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-jksbt\" (UID: \"06c68d45-b14f-447c-a680-c68502125d31\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jksbt"
Dec 13 22:17:33 crc kubenswrapper[4866]: E1213 22:17:33.992509 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:34.492495829 +0000 UTC m=+52.533834381 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.992597 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/538f1baf-acb2-4a08-9baf-2d710def7477-image-import-ca\") pod \"apiserver-76f77b778f-wj4kw\" (UID: \"538f1baf-acb2-4a08-9baf-2d710def7477\") " pod="openshift-apiserver/apiserver-76f77b778f-wj4kw"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.993632 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/37655499-3f2b-496a-a47c-cbf31b9c19ef-images\") pod \"machine-config-operator-74547568cd-h29td\" (UID: \"37655499-3f2b-496a-a47c-cbf31b9c19ef\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h29td"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.993684 4866 request.go:700] Waited for 1.954634592s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/serviceaccounts/console/token
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.993766 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/538f1baf-acb2-4a08-9baf-2d710def7477-encryption-config\") pod \"apiserver-76f77b778f-wj4kw\" (UID: \"538f1baf-acb2-4a08-9baf-2d710def7477\") " pod="openshift-apiserver/apiserver-76f77b778f-wj4kw"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.993852 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6aebd4d-edf5-4874-8eb1-e1e7504fcd73-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hp9fq\" (UID: \"a6aebd4d-edf5-4874-8eb1-e1e7504fcd73\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hp9fq"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.993885 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnv6x\" (UniqueName: \"kubernetes.io/projected/d8283a63-3511-4508-ae32-3e5ec3488c19-kube-api-access-lnv6x\") pod \"dns-default-c5wt4\" (UID: \"d8283a63-3511-4508-ae32-3e5ec3488c19\") " pod="openshift-dns/dns-default-c5wt4"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.993929 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpdl9\" (UniqueName: \"kubernetes.io/projected/f607d834-3c1c-45e8-8208-19ed5aca7e95-kube-api-access-hpdl9\") pod \"ingress-operator-5b745b69d9-fw6xl\" (UID: \"f607d834-3c1c-45e8-8208-19ed5aca7e95\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fw6xl"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.993953 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35c963c4-def0-4384-a3bc-856b4cb6ed27-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ps9gj\" (UID: \"35c963c4-def0-4384-a3bc-856b4cb6ed27\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ps9gj"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.993974 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d8283a63-3511-4508-ae32-3e5ec3488c19-metrics-tls\") pod \"dns-default-c5wt4\" (UID: \"d8283a63-3511-4508-ae32-3e5ec3488c19\") " pod="openshift-dns/dns-default-c5wt4"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.994014 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f607d834-3c1c-45e8-8208-19ed5aca7e95-metrics-tls\") pod \"ingress-operator-5b745b69d9-fw6xl\" (UID: \"f607d834-3c1c-45e8-8208-19ed5aca7e95\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fw6xl"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.994031 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f607d834-3c1c-45e8-8208-19ed5aca7e95-bound-sa-token\") pod \"ingress-operator-5b745b69d9-fw6xl\" (UID: \"f607d834-3c1c-45e8-8208-19ed5aca7e95\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fw6xl"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.994074 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae751643-7d61-4b5a-9cc6-d3507a958aee-serving-cert\") pod \"etcd-operator-b45778765-dmd7t\" (UID: \"ae751643-7d61-4b5a-9cc6-d3507a958aee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dmd7t"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.994094 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/94716d23-2314-47b4-8159-f1f2f970c989-registration-dir\") pod \"csi-hostpathplugin-zgx58\" (UID: \"94716d23-2314-47b4-8159-f1f2f970c989\") " pod="hostpath-provisioner/csi-hostpathplugin-zgx58"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.994121 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/538f1baf-acb2-4a08-9baf-2d710def7477-node-pullsecrets\") pod \"apiserver-76f77b778f-wj4kw\" (UID: \"538f1baf-acb2-4a08-9baf-2d710def7477\") " pod="openshift-apiserver/apiserver-76f77b778f-wj4kw"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.994240 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f607d834-3c1c-45e8-8208-19ed5aca7e95-trusted-ca\") pod \"ingress-operator-5b745b69d9-fw6xl\" (UID: \"f607d834-3c1c-45e8-8208-19ed5aca7e95\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fw6xl"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.994268 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/37655499-3f2b-496a-a47c-cbf31b9c19ef-auth-proxy-config\") pod \"machine-config-operator-74547568cd-h29td\" (UID: \"37655499-3f2b-496a-a47c-cbf31b9c19ef\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h29td"
Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.994311 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/33af49a4-4694-4708-9930-d445c917d6a9-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7fjbc\" (UID: \"33af49a4-4694-4708-9930-d445c917d6a9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fjbc" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.994332 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnswr\" (UniqueName: \"kubernetes.io/projected/87bf1879-724a-4a9e-b4c8-d5de32457782-kube-api-access-gnswr\") pod \"migrator-59844c95c7-9nvcf\" (UID: \"87bf1879-724a-4a9e-b4c8-d5de32457782\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9nvcf" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.994348 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/eb8f2404-0acf-4a0a-a581-b5c767351742-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-kg5zb\" (UID: \"eb8f2404-0acf-4a0a-a581-b5c767351742\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kg5zb" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.994398 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58d4z\" (UniqueName: \"kubernetes.io/projected/eb8f2404-0acf-4a0a-a581-b5c767351742-kube-api-access-58d4z\") pod \"cni-sysctl-allowlist-ds-kg5zb\" (UID: \"eb8f2404-0acf-4a0a-a581-b5c767351742\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kg5zb" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.994418 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c65b13fe-f958-4b3e-8c79-c7f4028abaa7-certs\") pod \"machine-config-server-dc7kh\" (UID: \"c65b13fe-f958-4b3e-8c79-c7f4028abaa7\") " pod="openshift-machine-config-operator/machine-config-server-dc7kh" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.994434 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e110a18a-ce3e-4c58-b80d-9f8c9018a868-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-5247b\" (UID: \"e110a18a-ce3e-4c58-b80d-9f8c9018a868\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5247b" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.994450 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ae751643-7d61-4b5a-9cc6-d3507a958aee-etcd-client\") pod \"etcd-operator-b45778765-dmd7t\" (UID: \"ae751643-7d61-4b5a-9cc6-d3507a958aee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dmd7t" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.996605 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8eda9f75-a124-4cf4-ad0a-358a111fb147-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-v9nm2\" (UID: \"8eda9f75-a124-4cf4-ad0a-358a111fb147\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v9nm2" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.996634 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9055b773-45d4-411e-a3cf-f160e940b102-config-volume\") pod \"collect-profiles-29427735-h22d8\" (UID: \"9055b773-45d4-411e-a3cf-f160e940b102\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29427735-h22d8" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.996660 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/eb8f2404-0acf-4a0a-a581-b5c767351742-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-kg5zb\" (UID: \"eb8f2404-0acf-4a0a-a581-b5c767351742\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kg5zb" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.996681 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6whg9\" (UniqueName: \"kubernetes.io/projected/474a926a-06bd-4d6e-bd60-5cfd5a1fa37b-kube-api-access-6whg9\") pod \"dns-operator-744455d44c-72w7j\" (UID: \"474a926a-06bd-4d6e-bd60-5cfd5a1fa37b\") " pod="openshift-dns-operator/dns-operator-744455d44c-72w7j" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.996698 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/345bf766-2671-46d4-82f0-cb0838949d52-profile-collector-cert\") pod \"catalog-operator-68c6474976-qcftv\" (UID: \"345bf766-2671-46d4-82f0-cb0838949d52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcftv" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.996714 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/538f1baf-acb2-4a08-9baf-2d710def7477-etcd-client\") pod \"apiserver-76f77b778f-wj4kw\" (UID: \"538f1baf-acb2-4a08-9baf-2d710def7477\") " pod="openshift-apiserver/apiserver-76f77b778f-wj4kw" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.996729 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae751643-7d61-4b5a-9cc6-d3507a958aee-config\") pod \"etcd-operator-b45778765-dmd7t\" (UID: \"ae751643-7d61-4b5a-9cc6-d3507a958aee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dmd7t" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.996745 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/474a926a-06bd-4d6e-bd60-5cfd5a1fa37b-metrics-tls\") pod \"dns-operator-744455d44c-72w7j\" (UID: \"474a926a-06bd-4d6e-bd60-5cfd5a1fa37b\") " pod="openshift-dns-operator/dns-operator-744455d44c-72w7j" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.996762 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ff0c44c6-31b2-46c3-a5b7-51c10ed59236-webhook-cert\") pod \"packageserver-d55dfcdfc-8p9hb\" (UID: \"ff0c44c6-31b2-46c3-a5b7-51c10ed59236\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8p9hb" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.996779 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/92fa87fc-9ff3-4add-b542-4a936c6d8d7a-signing-cabundle\") pod \"service-ca-9c57cc56f-g9k2k\" (UID: \"92fa87fc-9ff3-4add-b542-4a936c6d8d7a\") " pod="openshift-service-ca/service-ca-9c57cc56f-g9k2k" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.996812 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/538f1baf-acb2-4a08-9baf-2d710def7477-trusted-ca-bundle\") pod \"apiserver-76f77b778f-wj4kw\" (UID: \"538f1baf-acb2-4a08-9baf-2d710def7477\") " pod="openshift-apiserver/apiserver-76f77b778f-wj4kw" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.996828 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/94716d23-2314-47b4-8159-f1f2f970c989-mountpoint-dir\") pod \"csi-hostpathplugin-zgx58\" (UID: \"94716d23-2314-47b4-8159-f1f2f970c989\") " pod="hostpath-provisioner/csi-hostpathplugin-zgx58" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.996845 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrfld\" (UniqueName: \"kubernetes.io/projected/c5a9f2a1-3ea0-4386-b3d8-483570ff066d-kube-api-access-jrfld\") pod \"service-ca-operator-777779d784-c64bv\" (UID: \"c5a9f2a1-3ea0-4386-b3d8-483570ff066d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-c64bv" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.996862 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ff0c44c6-31b2-46c3-a5b7-51c10ed59236-apiservice-cert\") pod \"packageserver-d55dfcdfc-8p9hb\" (UID: \"ff0c44c6-31b2-46c3-a5b7-51c10ed59236\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8p9hb" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.996877 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ae751643-7d61-4b5a-9cc6-d3507a958aee-etcd-ca\") pod \"etcd-operator-b45778765-dmd7t\" (UID: \"ae751643-7d61-4b5a-9cc6-d3507a958aee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dmd7t" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.996903 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8283a63-3511-4508-ae32-3e5ec3488c19-config-volume\") pod \"dns-default-c5wt4\" (UID: \"d8283a63-3511-4508-ae32-3e5ec3488c19\") " pod="openshift-dns/dns-default-c5wt4" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.996919 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9055b773-45d4-411e-a3cf-f160e940b102-secret-volume\") pod \"collect-profiles-29427735-h22d8\" (UID: \"9055b773-45d4-411e-a3cf-f160e940b102\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29427735-h22d8" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.996936 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/92fa87fc-9ff3-4add-b542-4a936c6d8d7a-signing-key\") pod \"service-ca-9c57cc56f-g9k2k\" (UID: \"92fa87fc-9ff3-4add-b542-4a936c6d8d7a\") " pod="openshift-service-ca/service-ca-9c57cc56f-g9k2k" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.996977 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6aebd4d-edf5-4874-8eb1-e1e7504fcd73-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hp9fq\" (UID: \"a6aebd4d-edf5-4874-8eb1-e1e7504fcd73\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hp9fq" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.996994 4866 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ff0c44c6-31b2-46c3-a5b7-51c10ed59236-tmpfs\") pod \"packageserver-d55dfcdfc-8p9hb\" (UID: \"ff0c44c6-31b2-46c3-a5b7-51c10ed59236\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8p9hb" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.997016 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8kwb\" (UniqueName: \"kubernetes.io/projected/92fa87fc-9ff3-4add-b542-4a936c6d8d7a-kube-api-access-b8kwb\") pod \"service-ca-9c57cc56f-g9k2k\" (UID: \"92fa87fc-9ff3-4add-b542-4a936c6d8d7a\") " pod="openshift-service-ca/service-ca-9c57cc56f-g9k2k" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.997148 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/538f1baf-acb2-4a08-9baf-2d710def7477-audit\") pod \"apiserver-76f77b778f-wj4kw\" (UID: \"538f1baf-acb2-4a08-9baf-2d710def7477\") " pod="openshift-apiserver/apiserver-76f77b778f-wj4kw" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.990971 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/94716d23-2314-47b4-8159-f1f2f970c989-socket-dir\") pod \"csi-hostpathplugin-zgx58\" (UID: \"94716d23-2314-47b4-8159-f1f2f970c989\") " pod="hostpath-provisioner/csi-hostpathplugin-zgx58" Dec 13 22:17:33 crc kubenswrapper[4866]: I1213 22:17:33.998001 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f607d834-3c1c-45e8-8208-19ed5aca7e95-trusted-ca\") pod \"ingress-operator-5b745b69d9-fw6xl\" (UID: \"f607d834-3c1c-45e8-8208-19ed5aca7e95\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fw6xl" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.008502 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/538f1baf-acb2-4a08-9baf-2d710def7477-encryption-config\") pod \"apiserver-76f77b778f-wj4kw\" (UID: \"538f1baf-acb2-4a08-9baf-2d710def7477\") " pod="openshift-apiserver/apiserver-76f77b778f-wj4kw" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.009228 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae751643-7d61-4b5a-9cc6-d3507a958aee-serving-cert\") pod \"etcd-operator-b45778765-dmd7t\" (UID: \"ae751643-7d61-4b5a-9cc6-d3507a958aee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dmd7t" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.009408 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28vj7\" (UniqueName: \"kubernetes.io/projected/ae751643-7d61-4b5a-9cc6-d3507a958aee-kube-api-access-28vj7\") pod \"etcd-operator-b45778765-dmd7t\" (UID: \"ae751643-7d61-4b5a-9cc6-d3507a958aee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dmd7t" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.012772 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/37655499-3f2b-496a-a47c-cbf31b9c19ef-proxy-tls\") pod \"machine-config-operator-74547568cd-h29td\" (UID: \"37655499-3f2b-496a-a47c-cbf31b9c19ef\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h29td" Dec 13 22:17:34 crc kubenswrapper[4866]: 
I1213 22:17:34.014598 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/37655499-3f2b-496a-a47c-cbf31b9c19ef-auth-proxy-config\") pod \"machine-config-operator-74547568cd-h29td\" (UID: \"37655499-3f2b-496a-a47c-cbf31b9c19ef\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h29td" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.015892 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ae751643-7d61-4b5a-9cc6-d3507a958aee-etcd-ca\") pod \"etcd-operator-b45778765-dmd7t\" (UID: \"ae751643-7d61-4b5a-9cc6-d3507a958aee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dmd7t" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.016401 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/345bf766-2671-46d4-82f0-cb0838949d52-srv-cert\") pod \"catalog-operator-68c6474976-qcftv\" (UID: \"345bf766-2671-46d4-82f0-cb0838949d52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcftv" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.016717 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d8283a63-3511-4508-ae32-3e5ec3488c19-metrics-tls\") pod \"dns-default-c5wt4\" (UID: \"d8283a63-3511-4508-ae32-3e5ec3488c19\") " pod="openshift-dns/dns-default-c5wt4" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.016954 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/538f1baf-acb2-4a08-9baf-2d710def7477-trusted-ca-bundle\") pod \"apiserver-76f77b778f-wj4kw\" (UID: \"538f1baf-acb2-4a08-9baf-2d710def7477\") " pod="openshift-apiserver/apiserver-76f77b778f-wj4kw" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.017015 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/94716d23-2314-47b4-8159-f1f2f970c989-mountpoint-dir\") pod \"csi-hostpathplugin-zgx58\" (UID: \"94716d23-2314-47b4-8159-f1f2f970c989\") " pod="hostpath-provisioner/csi-hostpathplugin-zgx58" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.017771 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/94716d23-2314-47b4-8159-f1f2f970c989-registration-dir\") pod \"csi-hostpathplugin-zgx58\" (UID: \"94716d23-2314-47b4-8159-f1f2f970c989\") " pod="hostpath-provisioner/csi-hostpathplugin-zgx58" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.017976 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/eb8f2404-0acf-4a0a-a581-b5c767351742-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-kg5zb\" (UID: \"eb8f2404-0acf-4a0a-a581-b5c767351742\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kg5zb" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:33.999283 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7448d7f-ba89-4749-9cf2-60e55cffd82b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-vt92h\" (UID: \"e7448d7f-ba89-4749-9cf2-60e55cffd82b\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vt92h" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.019786 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8283a63-3511-4508-ae32-3e5ec3488c19-config-volume\") pod \"dns-default-c5wt4\" (UID: \"d8283a63-3511-4508-ae32-3e5ec3488c19\") " pod="openshift-dns/dns-default-c5wt4" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.020136 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ff0c44c6-31b2-46c3-a5b7-51c10ed59236-tmpfs\") pod \"packageserver-d55dfcdfc-8p9hb\" (UID: \"ff0c44c6-31b2-46c3-a5b7-51c10ed59236\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8p9hb" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.020637 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/538f1baf-acb2-4a08-9baf-2d710def7477-audit\") pod \"apiserver-76f77b778f-wj4kw\" (UID: \"538f1baf-acb2-4a08-9baf-2d710def7477\") " pod="openshift-apiserver/apiserver-76f77b778f-wj4kw" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:33.995405 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/06c68d45-b14f-447c-a680-c68502125d31-proxy-tls\") pod \"machine-config-controller-84d6567774-jksbt\" (UID: \"06c68d45-b14f-447c-a680-c68502125d31\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jksbt" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:33.995785 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/538f1baf-acb2-4a08-9baf-2d710def7477-node-pullsecrets\") pod \"apiserver-76f77b778f-wj4kw\" (UID: \"538f1baf-acb2-4a08-9baf-2d710def7477\") " pod="openshift-apiserver/apiserver-76f77b778f-wj4kw" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.027286 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9055b773-45d4-411e-a3cf-f160e940b102-config-volume\") pod \"collect-profiles-29427735-h22d8\" (UID: \"9055b773-45d4-411e-a3cf-f160e940b102\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29427735-h22d8" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:33.995937 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a5fbc614-c8e5-4ba5-b8c9-30ecec1cefb2-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-r85zh\" (UID: \"a5fbc614-c8e5-4ba5-b8c9-30ecec1cefb2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r85zh" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.027789 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/eb8f2404-0acf-4a0a-a581-b5c767351742-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-kg5zb\" (UID: \"eb8f2404-0acf-4a0a-a581-b5c767351742\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kg5zb" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:33.996243 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/33af49a4-4694-4708-9930-d445c917d6a9-srv-cert\") pod 
\"olm-operator-6b444d44fb-7fjbc\" (UID: \"33af49a4-4694-4708-9930-d445c917d6a9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fjbc" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.032013 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/538f1baf-acb2-4a08-9baf-2d710def7477-serving-cert\") pod \"apiserver-76f77b778f-wj4kw\" (UID: \"538f1baf-acb2-4a08-9baf-2d710def7477\") " pod="openshift-apiserver/apiserver-76f77b778f-wj4kw" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.034307 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8eda9f75-a124-4cf4-ad0a-358a111fb147-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-v9nm2\" (UID: \"8eda9f75-a124-4cf4-ad0a-358a111fb147\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v9nm2" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.038593 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5a9f2a1-3ea0-4386-b3d8-483570ff066d-serving-cert\") pod \"service-ca-operator-777779d784-c64bv\" (UID: \"c5a9f2a1-3ea0-4386-b3d8-483570ff066d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-c64bv" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.039164 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/345bf766-2671-46d4-82f0-cb0838949d52-profile-collector-cert\") pod \"catalog-operator-68c6474976-qcftv\" (UID: \"345bf766-2671-46d4-82f0-cb0838949d52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcftv" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.040574 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75g6q\" (UniqueName: \"kubernetes.io/projected/0f6a72e7-f4e0-4628-91ad-c3f81514f9f9-kube-api-access-75g6q\") pod \"console-f9d7485db-zp55r\" (UID: \"0f6a72e7-f4e0-4628-91ad-c3f81514f9f9\") " pod="openshift-console/console-f9d7485db-zp55r" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.041021 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ff0c44c6-31b2-46c3-a5b7-51c10ed59236-apiservice-cert\") pod \"packageserver-d55dfcdfc-8p9hb\" (UID: \"ff0c44c6-31b2-46c3-a5b7-51c10ed59236\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8p9hb" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.041747 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/92fa87fc-9ff3-4add-b542-4a936c6d8d7a-signing-key\") pod \"service-ca-9c57cc56f-g9k2k\" (UID: \"92fa87fc-9ff3-4add-b542-4a936c6d8d7a\") " pod="openshift-service-ca/service-ca-9c57cc56f-g9k2k" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.042240 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9055b773-45d4-411e-a3cf-f160e940b102-secret-volume\") pod \"collect-profiles-29427735-h22d8\" (UID: \"9055b773-45d4-411e-a3cf-f160e940b102\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29427735-h22d8" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.043387 4866 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae751643-7d61-4b5a-9cc6-d3507a958aee-config\") pod \"etcd-operator-b45778765-dmd7t\" (UID: \"ae751643-7d61-4b5a-9cc6-d3507a958aee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dmd7t" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.043507 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c65b13fe-f958-4b3e-8c79-c7f4028abaa7-node-bootstrap-token\") pod \"machine-config-server-dc7kh\" (UID: \"c65b13fe-f958-4b3e-8c79-c7f4028abaa7\") " pod="openshift-machine-config-operator/machine-config-server-dc7kh" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.046733 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35c963c4-def0-4384-a3bc-856b4cb6ed27-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ps9gj\" (UID: \"35c963c4-def0-4384-a3bc-856b4cb6ed27\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ps9gj" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.047916 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/538f1baf-acb2-4a08-9baf-2d710def7477-etcd-client\") pod \"apiserver-76f77b778f-wj4kw\" (UID: \"538f1baf-acb2-4a08-9baf-2d710def7477\") " pod="openshift-apiserver/apiserver-76f77b778f-wj4kw" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.049452 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/33af49a4-4694-4708-9930-d445c917d6a9-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7fjbc\" (UID: \"33af49a4-4694-4708-9930-d445c917d6a9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fjbc" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.050528 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/92fa87fc-9ff3-4add-b542-4a936c6d8d7a-signing-cabundle\") pod \"service-ca-9c57cc56f-g9k2k\" (UID: \"92fa87fc-9ff3-4add-b542-4a936c6d8d7a\") " pod="openshift-service-ca/service-ca-9c57cc56f-g9k2k" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.051485 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6aebd4d-edf5-4874-8eb1-e1e7504fcd73-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hp9fq\" (UID: \"a6aebd4d-edf5-4874-8eb1-e1e7504fcd73\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hp9fq" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.051871 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/474a926a-06bd-4d6e-bd60-5cfd5a1fa37b-metrics-tls\") pod \"dns-operator-744455d44c-72w7j\" (UID: \"474a926a-06bd-4d6e-bd60-5cfd5a1fa37b\") " pod="openshift-dns-operator/dns-operator-744455d44c-72w7j" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.062345 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6aebd4d-edf5-4874-8eb1-e1e7504fcd73-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hp9fq\" (UID: \"a6aebd4d-edf5-4874-8eb1-e1e7504fcd73\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hp9fq" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.064156 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e110a18a-ce3e-4c58-b80d-9f8c9018a868-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-5247b\" (UID: \"e110a18a-ce3e-4c58-b80d-9f8c9018a868\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5247b" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.065873 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5a9f2a1-3ea0-4386-b3d8-483570ff066d-config\") pod \"service-ca-operator-777779d784-c64bv\" (UID: \"c5a9f2a1-3ea0-4386-b3d8-483570ff066d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-c64bv" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.067399 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5a9f2a1-3ea0-4386-b3d8-483570ff066d-config\") pod \"service-ca-operator-777779d784-c64bv\" (UID: \"c5a9f2a1-3ea0-4386-b3d8-483570ff066d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-c64bv" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.067904 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6aebd4d-edf5-4874-8eb1-e1e7504fcd73-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hp9fq\" (UID: \"a6aebd4d-edf5-4874-8eb1-e1e7504fcd73\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hp9fq" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.077896 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f607d834-3c1c-45e8-8208-19ed5aca7e95-metrics-tls\") pod \"ingress-operator-5b745b69d9-fw6xl\" (UID: \"f607d834-3c1c-45e8-8208-19ed5aca7e95\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fw6xl" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.078272 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ff0c44c6-31b2-46c3-a5b7-51c10ed59236-webhook-cert\") pod \"packageserver-d55dfcdfc-8p9hb\" (UID: \"ff0c44c6-31b2-46c3-a5b7-51c10ed59236\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8p9hb" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.078940 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c65b13fe-f958-4b3e-8c79-c7f4028abaa7-node-bootstrap-token\") pod \"machine-config-server-dc7kh\" (UID: \"c65b13fe-f958-4b3e-8c79-c7f4028abaa7\") " pod="openshift-machine-config-operator/machine-config-server-dc7kh" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.079394 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ae751643-7d61-4b5a-9cc6-d3507a958aee-etcd-client\") pod \"etcd-operator-b45778765-dmd7t\" (UID: \"ae751643-7d61-4b5a-9cc6-d3507a958aee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dmd7t" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.079961 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/c65b13fe-f958-4b3e-8c79-c7f4028abaa7-certs\") pod \"machine-config-server-dc7kh\" (UID: \"c65b13fe-f958-4b3e-8c79-c7f4028abaa7\") " pod="openshift-machine-config-operator/machine-config-server-dc7kh" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.090795 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h557w\" (UniqueName: \"kubernetes.io/projected/0771c5ba-de65-4932-a37e-b21a2337f265-kube-api-access-h557w\") pod \"cluster-samples-operator-665b6dd947-vvnn7\" (UID: \"0771c5ba-de65-4932-a37e-b21a2337f265\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vvnn7" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.092088 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svrf6\" (UniqueName: \"kubernetes.io/projected/a3f3a62c-f055-4e72-8b85-47b8d577ca3a-kube-api-access-svrf6\") pod \"apiserver-7bbb656c7d-qcsh9\" (UID: \"a3f3a62c-f055-4e72-8b85-47b8d577ca3a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qcsh9" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.095267 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.098217 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35c963c4-def0-4384-a3bc-856b4cb6ed27-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ps9gj\" (UID: \"35c963c4-def0-4384-a3bc-856b4cb6ed27\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ps9gj" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.099697 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7v2j\" (UniqueName: \"kubernetes.io/projected/b24e9c98-6286-4e1c-82ff-7048e879d889-kube-api-access-c7v2j\") pod \"openshift-apiserver-operator-796bbdcf4f-nnz5c\" (UID: \"b24e9c98-6286-4e1c-82ff-7048e879d889\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnz5c" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.105914 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvbwg\" (UniqueName: \"kubernetes.io/projected/f12ed525-d75a-402f-b6c5-ce6298cb98f1-kube-api-access-rvbwg\") pod \"router-default-5444994796-wbx6k\" (UID: \"f12ed525-d75a-402f-b6c5-ce6298cb98f1\") " pod="openshift-ingress/router-default-5444994796-wbx6k" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.114266 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnz5c" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.118226 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.135210 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.145026 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/107bca77-52a4-442c-ae09-ee527f46a9c5-cert\") pod \"ingress-canary-57fcq\" (UID: \"107bca77-52a4-442c-ae09-ee527f46a9c5\") " pod="openshift-ingress-canary/ingress-canary-57fcq" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.155081 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-zp55r" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.157443 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.166544 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qcsh9" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.166686 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:34 crc kubenswrapper[4866]: E1213 22:17:34.167262 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:34.66724547 +0000 UTC m=+52.708584022 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.181634 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.182326 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vvnn7" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.195529 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.200193 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-55ztr" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.215517 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.231304 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-wbx6k" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.238921 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.246600 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vnj9d"] Dec 13 22:17:34 crc kubenswrapper[4866]: W1213 22:17:34.251780 4866 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf12ed525_d75a_402f_b6c5_ce6298cb98f1.slice/crio-a1ff922763bce7d4529305ab316052e2a4fd6d57f22ecc6756de07c724a33db2 WatchSource:0}: Error finding container a1ff922763bce7d4529305ab316052e2a4fd6d57f22ecc6756de07c724a33db2: Status 404 returned error can't find the container with id a1ff922763bce7d4529305ab316052e2a4fd6d57f22ecc6756de07c724a33db2 Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.268384 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:34 crc kubenswrapper[4866]: E1213 22:17:34.268775 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:34.768758327 +0000 UTC m=+52.810096879 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.276352 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.285083 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b8290a2-1abc-4343-8aa9-27a3f16f64f7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ppg68\" (UID: \"6b8290a2-1abc-4343-8aa9-27a3f16f64f7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ppg68" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.291469 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ft9zs"] Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.297958 4866 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.314985 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 13 22:17:34 crc kubenswrapper[4866]: W1213 22:17:34.319468 4866 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2401c59_016a_4806_ac04_bbea98467a21.slice/crio-0a1f760b51f50572cafc2ff380698bbab695cb2a3e166ada8f185f19e8977a3b WatchSource:0}: Error finding container 0a1f760b51f50572cafc2ff380698bbab695cb2a3e166ada8f185f19e8977a3b: Status 404 returned error can't find the container with id 0a1f760b51f50572cafc2ff380698bbab695cb2a3e166ada8f185f19e8977a3b Dec 13 22:17:34 crc kubenswrapper[4866]: W1213 22:17:34.320203 4866 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2b705b1_0852_447c_9bed_e23342613e1f.slice/crio-0f884caf810ebf7b2e1b12a732a21a0c8c1e9eae587da26a51eb8e8ac08555e4 WatchSource:0}: Error finding container 0f884caf810ebf7b2e1b12a732a21a0c8c1e9eae587da26a51eb8e8ac08555e4: Status 404 returned error can't find the container with id 0f884caf810ebf7b2e1b12a732a21a0c8c1e9eae587da26a51eb8e8ac08555e4 Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.336198 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.367349 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-tlbgv"] Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.376126 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 13 22:17:34 crc kubenswrapper[4866]: E1213 22:17:34.372571 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-13 22:17:34.872559249 +0000 UTC m=+52.913897801 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.372503 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.377180 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:34 crc kubenswrapper[4866]: E1213 22:17:34.377674 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:34.877661399 +0000 UTC m=+52.918999951 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.407522 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dpwh\" (UniqueName: \"kubernetes.io/projected/6fe29c2e-47b8-434d-a38f-8edd2992e345-kube-api-access-5dpwh\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.408118 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6svzb"] Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.417150 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-b2688"] Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.420791 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6fe29c2e-47b8-434d-a38f-8edd2992e345-bound-sa-token\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.437749 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-ppg68" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.441841 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv2xv\" (UniqueName: \"kubernetes.io/projected/a82c92c0-47dc-4f29-8aa0-304a9f34f728-kube-api-access-dv2xv\") pod \"marketplace-operator-79b997595-brrt8\" (UID: \"a82c92c0-47dc-4f29-8aa0-304a9f34f728\") " pod="openshift-marketplace/marketplace-operator-79b997595-brrt8" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.451327 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnz5c"] Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.453300 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m477s\" (UniqueName: \"kubernetes.io/projected/06c68d45-b14f-447c-a680-c68502125d31-kube-api-access-m477s\") pod \"machine-config-controller-84d6567774-jksbt\" (UID: \"06c68d45-b14f-447c-a680-c68502125d31\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jksbt" Dec 13 22:17:34 crc kubenswrapper[4866]: W1213 22:17:34.469849 4866 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod336a2b00_9af4_4f90_88a9_7920886c82ca.slice/crio-7ff3eac0e0424a07d624cf0a1e59f678125fdb176b5824f758a8ec5793e2ba1a WatchSource:0}: Error finding container 7ff3eac0e0424a07d624cf0a1e59f678125fdb176b5824f758a8ec5793e2ba1a: Status 404 returned error can't find the container with id 7ff3eac0e0424a07d624cf0a1e59f678125fdb176b5824f758a8ec5793e2ba1a Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.473732 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tghjh\" (UniqueName: \"kubernetes.io/projected/107bca77-52a4-442c-ae09-ee527f46a9c5-kube-api-access-tghjh\") pod \"ingress-canary-57fcq\" (UID: \"107bca77-52a4-442c-ae09-ee527f46a9c5\") " pod="openshift-ingress-canary/ingress-canary-57fcq" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.477994 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:34 crc kubenswrapper[4866]: E1213 22:17:34.478605 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:34.978591303 +0000 UTC m=+53.019929855 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.498301 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df5qs\" (UniqueName: \"kubernetes.io/projected/37655499-3f2b-496a-a47c-cbf31b9c19ef-kube-api-access-df5qs\") pod \"machine-config-operator-74547568cd-h29td\" (UID: \"37655499-3f2b-496a-a47c-cbf31b9c19ef\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h29td" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.503758 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-brrt8" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.514219 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w47q6\" (UniqueName: \"kubernetes.io/projected/94716d23-2314-47b4-8159-f1f2f970c989-kube-api-access-w47q6\") pod \"csi-hostpathplugin-zgx58\" (UID: \"94716d23-2314-47b4-8159-f1f2f970c989\") " pod="hostpath-provisioner/csi-hostpathplugin-zgx58" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.532427 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2gck\" (UniqueName: \"kubernetes.io/projected/e110a18a-ce3e-4c58-b80d-9f8c9018a868-kube-api-access-z2gck\") pod \"multus-admission-controller-857f4d67dd-5247b\" (UID: \"e110a18a-ce3e-4c58-b80d-9f8c9018a868\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5247b" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.543431 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-zp55r"] Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.559927 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmjj4\" (UniqueName: \"kubernetes.io/projected/9055b773-45d4-411e-a3cf-f160e940b102-kube-api-access-hmjj4\") pod \"collect-profiles-29427735-h22d8\" (UID: \"9055b773-45d4-411e-a3cf-f160e940b102\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29427735-h22d8" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.571752 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp8kk\" (UniqueName: \"kubernetes.io/projected/e7448d7f-ba89-4749-9cf2-60e55cffd82b-kube-api-access-wp8kk\") pod \"control-plane-machine-set-operator-78cbb6b69f-vt92h\" (UID: \"e7448d7f-ba89-4749-9cf2-60e55cffd82b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vt92h" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.579929 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:34 crc kubenswrapper[4866]: E1213 22:17:34.580230 4866 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:35.080218644 +0000 UTC m=+53.121557196 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.594191 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29427735-h22d8" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.601991 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-57fcq" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.602461 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-zgx58" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.602477 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vt92h" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.602970 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdcj9\" (UniqueName: \"kubernetes.io/projected/345bf766-2671-46d4-82f0-cb0838949d52-kube-api-access-kdcj9\") pod \"catalog-operator-68c6474976-qcftv\" (UID: \"345bf766-2671-46d4-82f0-cb0838949d52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcftv" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.615371 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-s4jkf" event={"ID":"9b47c24d-6353-449a-b61c-60672ef29fbd","Type":"ContainerStarted","Data":"99b899cf0fdfaecca0d18373facf0de8363e55e838948df20ffb1e230e26add5"} Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.615418 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-s4jkf" event={"ID":"9b47c24d-6353-449a-b61c-60672ef29fbd","Type":"ContainerStarted","Data":"c855c98bc13d6734a9eaf7ce81a7866667a9daad4ebebec93f25849676100e50"} Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.616448 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-s4jkf" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.617551 4866 patch_prober.go:28] interesting pod/console-operator-58897d9998-s4jkf container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.617600 4866 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-s4jkf" podUID="9b47c24d-6353-449a-b61c-60672ef29fbd" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: 
connection refused" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.622526 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h29td" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.625240 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zzlc\" (UniqueName: \"kubernetes.io/projected/ff0c44c6-31b2-46c3-a5b7-51c10ed59236-kube-api-access-7zzlc\") pod \"packageserver-d55dfcdfc-8p9hb\" (UID: \"ff0c44c6-31b2-46c3-a5b7-51c10ed59236\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8p9hb" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.626587 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-wbx6k" event={"ID":"f12ed525-d75a-402f-b6c5-ce6298cb98f1","Type":"ContainerStarted","Data":"a1ff922763bce7d4529305ab316052e2a4fd6d57f22ecc6756de07c724a33db2"} Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.631353 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-b2688" event={"ID":"be954a83-6cf4-4f06-9de5-0540e967cfe9","Type":"ContainerStarted","Data":"0b75677b4eb901a82df99ad1a62550b0f79d7473cbbfc24bfd2eb7f1d49092f8"} Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.641445 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj5vk\" (UniqueName: \"kubernetes.io/projected/8eda9f75-a124-4cf4-ad0a-358a111fb147-kube-api-access-kj5vk\") pod \"kube-storage-version-migrator-operator-b67b599dd-v9nm2\" (UID: \"8eda9f75-a124-4cf4-ad0a-358a111fb147\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v9nm2" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.642010 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tlbgv" event={"ID":"b5141b6f-cd05-4499-ae0d-cdd14f3f5a61","Type":"ContainerStarted","Data":"dec1c58718a33aefbbadc4ffaef03d7f01e55378515550f6be2a526d0c4b6e69"} Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.649575 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ft9zs" event={"ID":"e2401c59-016a-4806-ac04-bbea98467a21","Type":"ContainerStarted","Data":"0a1f760b51f50572cafc2ff380698bbab695cb2a3e166ada8f185f19e8977a3b"} Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.657353 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8p9hb" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.665436 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-69pzm" event={"ID":"73e28583-3224-4f65-a4c6-c1aee16deda8","Type":"ContainerStarted","Data":"9509d1691d31367e6995aaec7ea97ecc3f7fd5ae366dbba40f67ee6b589438da"} Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.665478 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-69pzm" event={"ID":"73e28583-3224-4f65-a4c6-c1aee16deda8","Type":"ContainerStarted","Data":"95a1540e964c2fd6941dccf15bbb9eba16c66117052d07b800a2f832e9126ff0"} Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.668232 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-69pzm" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.668874 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jksbt" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.680102 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbqxt\" (UniqueName: \"kubernetes.io/projected/538f1baf-acb2-4a08-9baf-2d710def7477-kube-api-access-rbqxt\") pod \"apiserver-76f77b778f-wj4kw\" (UID: \"538f1baf-acb2-4a08-9baf-2d710def7477\") " pod="openshift-apiserver/apiserver-76f77b778f-wj4kw" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.680596 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-5247b" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.682116 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:34 crc kubenswrapper[4866]: E1213 22:17:34.682234 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:35.182218834 +0000 UTC m=+53.223557386 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.682663 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:34 crc kubenswrapper[4866]: E1213 22:17:34.682925 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:35.18291814 +0000 UTC m=+53.224256692 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.692500 4866 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-69pzm container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.692571 4866 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-69pzm" podUID="73e28583-3224-4f65-a4c6-c1aee16deda8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.693540 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fczfq\" (UniqueName: \"kubernetes.io/projected/33af49a4-4694-4708-9930-d445c917d6a9-kube-api-access-fczfq\") pod \"olm-operator-6b444d44fb-7fjbc\" (UID: \"33af49a4-4694-4708-9930-d445c917d6a9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fjbc" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.697701 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6svzb" event={"ID":"336a2b00-9af4-4f90-88a9-7920886c82ca","Type":"ContainerStarted","Data":"7ff3eac0e0424a07d624cf0a1e59f678125fdb176b5824f758a8ec5793e2ba1a"} Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.699108 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn6bt\" (UniqueName: \"kubernetes.io/projected/a5fbc614-c8e5-4ba5-b8c9-30ecec1cefb2-kube-api-access-sn6bt\") pod 
\"package-server-manager-789f6589d5-r85zh\" (UID: \"a5fbc614-c8e5-4ba5-b8c9-30ecec1cefb2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r85zh" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.704228 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r85zh" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.704600 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" event={"ID":"144e732e-78b7-4e31-8f30-ed505c2ae0e9","Type":"ContainerStarted","Data":"49575d35de63ee7105307233ccc361db454bad77711a325a1274b40559e6c82e"} Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.704653 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" event={"ID":"144e732e-78b7-4e31-8f30-ed505c2ae0e9","Type":"ContainerStarted","Data":"12f894700dc91092ec58aa9d468b28c3a00de3e72557c36c97114a2d58646bf9"} Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.710363 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.713554 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v9nm2" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.715752 4866 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-fjf67 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused" start-of-body= Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.715792 4866 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" podUID="144e732e-78b7-4e31-8f30-ed505c2ae0e9" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.719963 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-s54hq" event={"ID":"5a10e286-d082-421b-b0c0-a17de2547023","Type":"ContainerStarted","Data":"ea13d6a9fbf94e0255973df576c259cf03807c4b1b027f5d881f335bbdf5b6fa"} Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.720004 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-s54hq" event={"ID":"5a10e286-d082-421b-b0c0-a17de2547023","Type":"ContainerStarted","Data":"2ed0941bb42177a58b492022820b8d55edfb2ae5113a0db15aad6c0d2a4de496"} Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.731745 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnz5c" event={"ID":"b24e9c98-6286-4e1c-82ff-7048e879d889","Type":"ContainerStarted","Data":"b7397556601a7b8a52093d96f639170fe0b9d90cbe39bcc5128c951e22006558"} Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.736533 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6aebd4d-edf5-4874-8eb1-e1e7504fcd73-kube-api-access\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-hp9fq\" (UID: \"a6aebd4d-edf5-4874-8eb1-e1e7504fcd73\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hp9fq" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.737225 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vnj9d" event={"ID":"c2b705b1-0852-447c-9bed-e23342613e1f","Type":"ContainerStarted","Data":"0f884caf810ebf7b2e1b12a732a21a0c8c1e9eae587da26a51eb8e8ac08555e4"} Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.739113 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zp55r" event={"ID":"0f6a72e7-f4e0-4628-91ad-c3f81514f9f9","Type":"ContainerStarted","Data":"bed412ef9e38cb0f1f0fd84321bc5266fa4d99d3d5c96eca49b9b8d5503e7964"} Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.743667 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l4vcs" event={"ID":"b601d73e-8fa8-4ece-8141-6a289f058547","Type":"ContainerStarted","Data":"b6b895bf6f857dc20033aa4fc040d9711f20ff35afef198919216a3830520108"} Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.749366 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tcm8\" (UniqueName: \"kubernetes.io/projected/c65b13fe-f958-4b3e-8c79-c7f4028abaa7-kube-api-access-5tcm8\") pod \"machine-config-server-dc7kh\" (UID: \"c65b13fe-f958-4b3e-8c79-c7f4028abaa7\") " pod="openshift-machine-config-operator/machine-config-server-dc7kh" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.766283 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnv6x\" (UniqueName: \"kubernetes.io/projected/d8283a63-3511-4508-ae32-3e5ec3488c19-kube-api-access-lnv6x\") pod \"dns-default-c5wt4\" (UID: \"d8283a63-3511-4508-ae32-3e5ec3488c19\") " pod="openshift-dns/dns-default-c5wt4" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.766791 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-55ztr"] Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.778738 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpdl9\" (UniqueName: \"kubernetes.io/projected/f607d834-3c1c-45e8-8208-19ed5aca7e95-kube-api-access-hpdl9\") pod \"ingress-operator-5b745b69d9-fw6xl\" (UID: \"f607d834-3c1c-45e8-8208-19ed5aca7e95\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fw6xl" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.785616 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vvnn7"] Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.787300 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:34 crc kubenswrapper[4866]: E1213 22:17:34.790015 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-13 22:17:35.289998429 +0000 UTC m=+53.331336981 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.799678 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35c963c4-def0-4384-a3bc-856b4cb6ed27-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ps9gj\" (UID: \"35c963c4-def0-4384-a3bc-856b4cb6ed27\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ps9gj" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.822474 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-qcsh9"] Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.841621 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f607d834-3c1c-45e8-8208-19ed5aca7e95-bound-sa-token\") pod \"ingress-operator-5b745b69d9-fw6xl\" (UID: \"f607d834-3c1c-45e8-8208-19ed5aca7e95\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fw6xl" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.844739 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcftv" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.853492 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hp9fq" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.853735 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrfld\" (UniqueName: \"kubernetes.io/projected/c5a9f2a1-3ea0-4386-b3d8-483570ff066d-kube-api-access-jrfld\") pod \"service-ca-operator-777779d784-c64bv\" (UID: \"c5a9f2a1-3ea0-4386-b3d8-483570ff066d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-c64bv" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.862202 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-ppg68"] Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.862581 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-wj4kw" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.865751 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnswr\" (UniqueName: \"kubernetes.io/projected/87bf1879-724a-4a9e-b4c8-d5de32457782-kube-api-access-gnswr\") pod \"migrator-59844c95c7-9nvcf\" (UID: \"87bf1879-724a-4a9e-b4c8-d5de32457782\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9nvcf" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.877579 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fw6xl" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.885586 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8kwb\" (UniqueName: \"kubernetes.io/projected/92fa87fc-9ff3-4add-b542-4a936c6d8d7a-kube-api-access-b8kwb\") pod \"service-ca-9c57cc56f-g9k2k\" (UID: \"92fa87fc-9ff3-4add-b542-4a936c6d8d7a\") " pod="openshift-service-ca/service-ca-9c57cc56f-g9k2k" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.888262 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9nvcf" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.889012 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:34 crc kubenswrapper[4866]: E1213 22:17:34.889341 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:35.389330736 +0000 UTC m=+53.430669288 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.907098 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-c5wt4" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.930623 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58d4z\" (UniqueName: \"kubernetes.io/projected/eb8f2404-0acf-4a0a-a581-b5c767351742-kube-api-access-58d4z\") pod \"cni-sysctl-allowlist-ds-kg5zb\" (UID: \"eb8f2404-0acf-4a0a-a581-b5c767351742\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kg5zb" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.933790 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-g9k2k" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.942678 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6whg9\" (UniqueName: \"kubernetes.io/projected/474a926a-06bd-4d6e-bd60-5cfd5a1fa37b-kube-api-access-6whg9\") pod \"dns-operator-744455d44c-72w7j\" (UID: \"474a926a-06bd-4d6e-bd60-5cfd5a1fa37b\") " pod="openshift-dns-operator/dns-operator-744455d44c-72w7j" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.946884 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ps9gj" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.962420 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-c64bv" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.981092 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28vj7\" (UniqueName: \"kubernetes.io/projected/ae751643-7d61-4b5a-9cc6-d3507a958aee-kube-api-access-28vj7\") pod \"etcd-operator-b45778765-dmd7t\" (UID: \"ae751643-7d61-4b5a-9cc6-d3507a958aee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dmd7t" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.993501 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:34 crc kubenswrapper[4866]: E1213 22:17:34.993847 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:35.493810673 +0000 UTC m=+53.535149225 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.995021 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fjbc" Dec 13 22:17:34 crc kubenswrapper[4866]: I1213 22:17:34.995666 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-dc7kh" Dec 13 22:17:35 crc kubenswrapper[4866]: I1213 22:17:35.022167 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-brrt8"] Dec 13 22:17:35 crc kubenswrapper[4866]: I1213 22:17:35.025635 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-kg5zb" Dec 13 22:17:35 crc kubenswrapper[4866]: I1213 22:17:35.095836 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:35 crc kubenswrapper[4866]: E1213 22:17:35.096377 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:35.596362466 +0000 UTC m=+53.637701018 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:35 crc kubenswrapper[4866]: W1213 22:17:35.104213 4866 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3f3a62c_f055_4e72_8b85_47b8d577ca3a.slice/crio-80f858b07032b217041ee475f3df4bb06d12c5fe36d54056dead976d1aefd965 WatchSource:0}: Error finding container 80f858b07032b217041ee475f3df4bb06d12c5fe36d54056dead976d1aefd965: Status 404 returned error can't find the container with id 80f858b07032b217041ee475f3df4bb06d12c5fe36d54056dead976d1aefd965 Dec 13 22:17:35 crc kubenswrapper[4866]: I1213 22:17:35.168952 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-dmd7t" Dec 13 22:17:35 crc kubenswrapper[4866]: I1213 22:17:35.196576 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:35 crc kubenswrapper[4866]: E1213 22:17:35.200164 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:35.700144037 +0000 UTC m=+53.741482609 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:35 crc kubenswrapper[4866]: I1213 22:17:35.211110 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29427735-h22d8"] Dec 13 22:17:35 crc kubenswrapper[4866]: I1213 22:17:35.212238 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-72w7j" Dec 13 22:17:35 crc kubenswrapper[4866]: I1213 22:17:35.297959 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:35 crc kubenswrapper[4866]: E1213 22:17:35.298257 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-13 22:17:35.798242425 +0000 UTC m=+53.839580987 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:35 crc kubenswrapper[4866]: I1213 22:17:35.343698 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-57fcq"] Dec 13 22:17:35 crc kubenswrapper[4866]: I1213 22:17:35.398670 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:35 crc kubenswrapper[4866]: E1213 22:17:35.399021 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:35.898994275 +0000 UTC m=+53.940332827 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:35 crc kubenswrapper[4866]: I1213 22:17:35.489964 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-h29td"] Dec 13 22:17:35 crc kubenswrapper[4866]: I1213 22:17:35.515177 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:35 crc kubenswrapper[4866]: E1213 22:17:35.515691 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:36.01567376 +0000 UTC m=+54.057012312 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:35 crc kubenswrapper[4866]: I1213 22:17:35.552221 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-zgx58"] Dec 13 22:17:35 crc kubenswrapper[4866]: I1213 22:17:35.619662 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:35 crc kubenswrapper[4866]: E1213 22:17:35.620033 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:36.120017765 +0000 UTC m=+54.161356317 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:35 crc kubenswrapper[4866]: I1213 22:17:35.721008 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:35 crc kubenswrapper[4866]: E1213 22:17:35.721279 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:36.221267936 +0000 UTC m=+54.262606488 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:35 crc kubenswrapper[4866]: I1213 22:17:35.743205 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vt92h"] Dec 13 22:17:35 crc kubenswrapper[4866]: I1213 22:17:35.762881 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-5247b"] Dec 13 22:17:35 crc kubenswrapper[4866]: I1213 22:17:35.828134 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:35 crc kubenswrapper[4866]: E1213 22:17:35.828294 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:36.328266103 +0000 UTC m=+54.369604655 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:35 crc kubenswrapper[4866]: I1213 22:17:35.828529 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d7692f7-4101-4c41-86f0-d8c2883110bf-metrics-certs\") pod \"network-metrics-daemon-sdd5b\" (UID: \"1d7692f7-4101-4c41-86f0-d8c2883110bf\") " pod="openshift-multus/network-metrics-daemon-sdd5b" Dec 13 22:17:35 crc kubenswrapper[4866]: I1213 22:17:35.828627 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:35 crc kubenswrapper[4866]: E1213 22:17:35.828926 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:36.328915209 +0000 UTC m=+54.370253761 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:35 crc kubenswrapper[4866]: I1213 22:17:35.843546 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-jksbt"] Dec 13 22:17:35 crc kubenswrapper[4866]: I1213 22:17:35.851527 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d7692f7-4101-4c41-86f0-d8c2883110bf-metrics-certs\") pod \"network-metrics-daemon-sdd5b\" (UID: \"1d7692f7-4101-4c41-86f0-d8c2883110bf\") " pod="openshift-multus/network-metrics-daemon-sdd5b" Dec 13 22:17:35 crc kubenswrapper[4866]: I1213 22:17:35.881290 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-wbx6k" event={"ID":"f12ed525-d75a-402f-b6c5-ce6298cb98f1","Type":"ContainerStarted","Data":"3a05a242f06f0c52116005c49a6573cba088e6a80c52d039cd36845dd5be8b8b"} Dec 13 22:17:35 crc kubenswrapper[4866]: I1213 22:17:35.908638 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-55ztr" event={"ID":"f70cb47b-ccf8-4be2-8e8d-9be364a22205","Type":"ContainerStarted","Data":"6585af047da865b648ff54d420fc27a6b99c318e7cea5c53bb25f1b44644705e"} Dec 13 22:17:35 crc kubenswrapper[4866]: I1213 22:17:35.909450 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29427735-h22d8" event={"ID":"9055b773-45d4-411e-a3cf-f160e940b102","Type":"ContainerStarted","Data":"421b4b4761e62cd617db416c39082a54a14684cc1f3d3e9423ac36fa24487316"} Dec 13 22:17:35 crc kubenswrapper[4866]: I1213 22:17:35.910400 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ft9zs" event={"ID":"e2401c59-016a-4806-ac04-bbea98467a21","Type":"ContainerStarted","Data":"31f820e313afc8778f82a5641e21578f408a0ed968953584de1774d10ffcbe23"} Dec 13 22:17:35 crc kubenswrapper[4866]: I1213 22:17:35.911680 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tlbgv" event={"ID":"b5141b6f-cd05-4499-ae0d-cdd14f3f5a61","Type":"ContainerStarted","Data":"73c25970dea8e00f1c145ad18221d0f77de2a78804bc1e00fa410ed24d79f0de"} Dec 13 22:17:35 crc kubenswrapper[4866]: I1213 22:17:35.912285 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-tlbgv" Dec 13 22:17:35 crc kubenswrapper[4866]: I1213 22:17:35.916183 4866 patch_prober.go:28] interesting pod/downloads-7954f5f757-tlbgv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Dec 13 22:17:35 crc kubenswrapper[4866]: I1213 22:17:35.916224 4866 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tlbgv" podUID="b5141b6f-cd05-4499-ae0d-cdd14f3f5a61" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Dec 13 22:17:35 crc kubenswrapper[4866]: I1213 22:17:35.928349 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-brrt8" event={"ID":"a82c92c0-47dc-4f29-8aa0-304a9f34f728","Type":"ContainerStarted","Data":"ed73c08f9dcdc0db47230a679668ce4d8b2fb0923233590cd9edc4002bf4dffa"} Dec 13 22:17:35 crc kubenswrapper[4866]: I1213 22:17:35.930445 4866 generic.go:334] "Generic (PLEG): container finished" podID="336a2b00-9af4-4f90-88a9-7920886c82ca" containerID="df0936e3ee7778db0fb308d3931909d1794494408944623bf5ba38efbe78a577" exitCode=0 Dec 13 22:17:35 crc kubenswrapper[4866]: I1213 22:17:35.930629 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6svzb" event={"ID":"336a2b00-9af4-4f90-88a9-7920886c82ca","Type":"ContainerDied","Data":"df0936e3ee7778db0fb308d3931909d1794494408944623bf5ba38efbe78a577"} Dec 13 22:17:35 crc kubenswrapper[4866]: I1213 22:17:35.934451 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:35 crc kubenswrapper[4866]: E1213 22:17:35.935344 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:36.435315732 +0000 UTC m=+54.476654284 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:35 crc kubenswrapper[4866]: I1213 22:17:35.935994 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vnj9d" event={"ID":"c2b705b1-0852-447c-9bed-e23342613e1f","Type":"ContainerStarted","Data":"158683b1ffd3305e624f9735916426a9b9154e7c2d66631f4c382b6b4203cba1"} Dec 13 22:17:35 crc kubenswrapper[4866]: I1213 22:17:35.942398 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8p9hb"] Dec 13 22:17:35 crc kubenswrapper[4866]: I1213 22:17:35.943970 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h29td" event={"ID":"37655499-3f2b-496a-a47c-cbf31b9c19ef","Type":"ContainerStarted","Data":"d6455a4d31d9a9e82ffdd47cfdc701b6388d2f64a97163b4fee068f982e0485e"} Dec 13 22:17:35 crc kubenswrapper[4866]: I1213 22:17:35.975889 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-kg5zb" event={"ID":"eb8f2404-0acf-4a0a-a581-b5c767351742","Type":"ContainerStarted","Data":"8d82516481abbca7404ebeaca0fe2a5f161163e0fb403bb6d23c7058f1cebf1a"} Dec 13 22:17:35 crc kubenswrapper[4866]: I1213 22:17:35.976998 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnz5c" event={"ID":"b24e9c98-6286-4e1c-82ff-7048e879d889","Type":"ContainerStarted","Data":"46c7ccd4c0a1603542c27e82de416d4406b03e65c927101fad25451f3c252814"} Dec 13 22:17:35 crc kubenswrapper[4866]: I1213 22:17:35.994958 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdd5b" Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.016434 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-b2688" event={"ID":"be954a83-6cf4-4f06-9de5-0540e967cfe9","Type":"ContainerStarted","Data":"e38c008ded3b789f11f0a7bdf80d7ea252361bbd658f54813f58b3cee7183515"} Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.017061 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-b2688" Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.017551 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-s54hq" podStartSLOduration=31.017539346 podStartE2EDuration="31.017539346s" podCreationTimestamp="2025-12-13 22:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:36.016542202 +0000 UTC m=+54.057880754" watchObservedRunningTime="2025-12-13 22:17:36.017539346 +0000 UTC m=+54.058877898" Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.020134 4866 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-b2688 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.020171 4866 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-b2688" podUID="be954a83-6cf4-4f06-9de5-0540e967cfe9" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.032666 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zgx58" event={"ID":"94716d23-2314-47b4-8159-f1f2f970c989","Type":"ContainerStarted","Data":"f0b71788af1aa421365bc11c1a7782d7acdbbd0a270bac94d50427686a760c35"} Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.037609 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:36 crc kubenswrapper[4866]: E1213 22:17:36.039171 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:36.539153594 +0000 UTC m=+54.580492216 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.042336 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zp55r" event={"ID":"0f6a72e7-f4e0-4628-91ad-c3f81514f9f9","Type":"ContainerStarted","Data":"353d265e39d19dc5f55d68aa4e6abf086e14de0bf6837ac8c27ae38466bae5dd"} Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.046725 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-57fcq" event={"ID":"107bca77-52a4-442c-ae09-ee527f46a9c5","Type":"ContainerStarted","Data":"6b6a45b6b14384c36c0cba8c5b8e5630132ca4ff5316b2eec43203354d38151a"} Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.050767 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hp9fq"] Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.051399 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qcsh9" event={"ID":"a3f3a62c-f055-4e72-8b85-47b8d577ca3a","Type":"ContainerStarted","Data":"80f858b07032b217041ee475f3df4bb06d12c5fe36d54056dead976d1aefd965"} Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.057115 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-ppg68" event={"ID":"6b8290a2-1abc-4343-8aa9-27a3f16f64f7","Type":"ContainerStarted","Data":"ed2e56191a74efedac8a97719007338ac9fea27405f1c563dcc554502fa03320"} Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.063243 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-69pzm" Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.101636 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-wj4kw"] Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.112498 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v9nm2"] Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.112874 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" podStartSLOduration=31.112854768 podStartE2EDuration="31.112854768s" podCreationTimestamp="2025-12-13 22:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:36.110151674 +0000 UTC m=+54.151490256" watchObservedRunningTime="2025-12-13 22:17:36.112854768 +0000 UTC m=+54.154193320" Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.139335 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-s4jkf" Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.148149 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:36 crc kubenswrapper[4866]: E1213 22:17:36.151151 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:36.651126518 +0000 UTC m=+54.692465070 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.167867 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:36 crc kubenswrapper[4866]: E1213 22:17:36.192946 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:36.692932962 +0000 UTC m=+54.734271504 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.239769 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-wbx6k" Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.268752 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:36 crc kubenswrapper[4866]: E1213 22:17:36.269172 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:36.769156925 +0000 UTC m=+54.810495477 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.306431 4866 patch_prober.go:28] interesting pod/router-default-5444994796-wbx6k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 13 22:17:36 crc kubenswrapper[4866]: [-]has-synced failed: reason withheld
Dec 13 22:17:36 crc kubenswrapper[4866]: [+]process-running ok
Dec 13 22:17:36 crc kubenswrapper[4866]: healthz check failed
Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.306772 4866 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wbx6k" podUID="f12ed525-d75a-402f-b6c5-ce6298cb98f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.352667 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-69pzm" podStartSLOduration=30.352650799 podStartE2EDuration="30.352650799s" podCreationTimestamp="2025-12-13 22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:36.339388037 +0000 UTC m=+54.380726589" watchObservedRunningTime="2025-12-13 22:17:36.352650799 +0000 UTC m=+54.393989351"
Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.369661 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86"
Dec 13 22:17:36 crc kubenswrapper[4866]: E1213 22:17:36.370003 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:36.869990887 +0000 UTC m=+54.911329439 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:36 crc kubenswrapper[4866]: W1213 22:17:36.389926 4866 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod538f1baf_acb2_4a08_9baf_2d710def7477.slice/crio-6159e286243f18d4144f4deacb904bdd47c2a210f265ab425ee92078ed641648 WatchSource:0}: Error finding container 6159e286243f18d4144f4deacb904bdd47c2a210f265ab425ee92078ed641648: Status 404 returned error can't find the container with id 6159e286243f18d4144f4deacb904bdd47c2a210f265ab425ee92078ed641648 Dec 13 22:17:36 crc kubenswrapper[4866]: W1213 22:17:36.410960 4866 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8eda9f75_a124_4cf4_ad0a_358a111fb147.slice/crio-cdc3c933392ce5f06b46c69d0ef719112da9094ea54f04813b1661fa67052280 WatchSource:0}: Error finding container cdc3c933392ce5f06b46c69d0ef719112da9094ea54f04813b1661fa67052280: Status 404 returned error can't find the container with id cdc3c933392ce5f06b46c69d0ef719112da9094ea54f04813b1661fa67052280 Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.474264 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:36 crc kubenswrapper[4866]: E1213 22:17:36.474617 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:36.974603068 +0000 UTC m=+55.015941620 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.477309 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9nvcf"] Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.516499 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.530037 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-s4jkf" podStartSLOduration=30.530020252 podStartE2EDuration="30.530020252s" podCreationTimestamp="2025-12-13 22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:36.528100827 +0000 UTC m=+54.569439379" watchObservedRunningTime="2025-12-13 22:17:36.530020252 +0000 UTC m=+54.571358804" Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.576955 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:36 crc kubenswrapper[4866]: E1213 22:17:36.577467 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:37.077456998 +0000 UTC m=+55.118795550 (durationBeforeRetry 500ms). 
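Each failed attempt above is stamped with a not-before time ("No retries permitted until ..."), so the reconciler rejects earlier retries instead of spinning; the 500ms figure is the durationBeforeRetry the kubelet chose for this operation. A minimal sketch of that gating pattern follows, assuming a 500ms base and a capped doubling policy purely for illustration (this is not the kubelet's actual code):

```go
package main

import (
	"fmt"
	"time"
)

// retryGate mimics the "No retries permitted until ..." stamp seen in the log:
// each failure pushes the next allowed attempt further into the future.
type retryGate struct {
	delay    time.Duration // current backoff; zero until the first failure
	notUntil time.Time     // earliest time the operation may run again
}

func (g *retryGate) fail(now time.Time) {
	const base = 500 * time.Millisecond // assumed base delay
	const maxDelay = 2 * time.Minute    // assumed cap
	if g.delay == 0 {
		g.delay = base
	} else if g.delay < maxDelay {
		g.delay *= 2
	}
	g.notUntil = now.Add(g.delay)
}

func (g *retryGate) ready(now time.Time) bool { return !now.Before(g.notUntil) }

func main() {
	var g retryGate
	now := time.Now()
	g.fail(now)
	fmt.Println("retry allowed at:", g.notUntil.Format(time.RFC3339Nano))
	fmt.Println("ready immediately?", g.ready(now)) // false: still inside the backoff window
}
```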
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.643696 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-c5wt4"] Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.645452 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r85zh"] Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.660233 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-c64bv"] Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.679767 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:36 crc kubenswrapper[4866]: E1213 22:17:36.680071 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:37.180032851 +0000 UTC m=+55.221371403 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.720770 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=2.720755219 podStartE2EDuration="2.720755219s" podCreationTimestamp="2025-12-13 22:17:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:36.719846907 +0000 UTC m=+54.761185459" watchObservedRunningTime="2025-12-13 22:17:36.720755219 +0000 UTC m=+54.762093771" Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.780796 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:36 crc kubenswrapper[4866]: E1213 22:17:36.781185 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-13 22:17:37.28116843 +0000 UTC m=+55.322506982 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.890247 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fjbc"] Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.891373 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:36 crc kubenswrapper[4866]: E1213 22:17:36.891857 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:37.391841643 +0000 UTC m=+55.433180195 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.912680 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-sdd5b"] Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.917803 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnz5c" podStartSLOduration=31.917784523 podStartE2EDuration="31.917784523s" podCreationTimestamp="2025-12-13 22:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:36.915702685 +0000 UTC m=+54.957041237" watchObservedRunningTime="2025-12-13 22:17:36.917784523 +0000 UTC m=+54.959123075" Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.976095 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-b2688" podStartSLOduration=30.976080235 podStartE2EDuration="30.976080235s" podCreationTimestamp="2025-12-13 22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:36.974228871 +0000 UTC m=+55.015567423" watchObservedRunningTime="2025-12-13 22:17:36.976080235 +0000 UTC m=+55.017418787" Dec 13 22:17:36 crc kubenswrapper[4866]: W1213 22:17:36.988816 4866 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5a9f2a1_3ea0_4386_b3d8_483570ff066d.slice/crio-02b425f4be5e3b5046eee97cd13c4e33101173b4e44b40272f24e7dbdd9a7dd9 WatchSource:0}: Error finding container 02b425f4be5e3b5046eee97cd13c4e33101173b4e44b40272f24e7dbdd9a7dd9: Status 404 returned error can't find the container with id 02b425f4be5e3b5046eee97cd13c4e33101173b4e44b40272f24e7dbdd9a7dd9 Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.990429 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcftv"] Dec 13 22:17:36 crc kubenswrapper[4866]: I1213 22:17:36.992482 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:36 crc kubenswrapper[4866]: E1213 22:17:36.992944 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:37.492929351 +0000 UTC m=+55.534267903 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:36.993662 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ps9gj"] Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.033276 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-fw6xl"] Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.040902 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-wbx6k" podStartSLOduration=31.040883979 podStartE2EDuration="31.040883979s" podCreationTimestamp="2025-12-13 22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:37.018199016 +0000 UTC m=+55.059537578" watchObservedRunningTime="2025-12-13 22:17:37.040883979 +0000 UTC m=+55.082222531" Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.042469 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-g9k2k"] Dec 13 22:17:37 crc kubenswrapper[4866]: W1213 22:17:37.092268 4866 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d7692f7_4101_4c41_86f0_d8c2883110bf.slice/crio-334bce446e2c7ebe511e55c9345a40b1a89b78e0cbd0580dd3280531635a550f WatchSource:0}: Error finding container 334bce446e2c7ebe511e55c9345a40b1a89b78e0cbd0580dd3280531635a550f: Status 404 returned error can't find the container with id 
334bce446e2c7ebe511e55c9345a40b1a89b78e0cbd0580dd3280531635a550f Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.094400 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:37 crc kubenswrapper[4866]: E1213 22:17:37.094834 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:37.594819668 +0000 UTC m=+55.636158210 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.127502 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6svzb" event={"ID":"336a2b00-9af4-4f90-88a9-7920886c82ca","Type":"ContainerStarted","Data":"65c81898d769055602cf7846e333708f61f74a7b6a236bfde65cb7a26d071fc6"} Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.128012 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6svzb" Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.146264 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-dmd7t"] Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.157316 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-5247b" event={"ID":"e110a18a-ce3e-4c58-b80d-9f8c9018a868","Type":"ContainerStarted","Data":"0655b1296edeac1898bc5b405ca751668cfab23edac58f0ac48ace2cd6cf1563"} Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.187930 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8p9hb" event={"ID":"ff0c44c6-31b2-46c3-a5b7-51c10ed59236","Type":"ContainerStarted","Data":"d9ba5765c15687907439d7d10d4df54826e7e3366064c6b9f2bad2bbaf34acdb"} Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.196962 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:37 crc kubenswrapper[4866]: E1213 22:17:37.198191 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:37.698177 +0000 UTC m=+55.739515542 (durationBeforeRetry 500ms). 
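A node-side way to see why the name lookup above keeps failing is to check the kubelet plugin-registration directory, since a CSI driver only enters the kubelet's registered list after its node plugin creates a registration socket there. A hedged sketch; the path is the conventional kubelet default and is assumed rather than taken from this log:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Conventional kubelet plugin-registration directory (may differ per deployment).
	const dir = "/var/lib/kubelet/plugins_registry"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read", dir, ":", err)
		return
	}
	for _, e := range entries {
		// A registered hostpath provisioner would leave a socket entry named after the driver.
		fmt.Println(e.Name())
	}
}
```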
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.204453 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-c64bv" event={"ID":"c5a9f2a1-3ea0-4386-b3d8-483570ff066d","Type":"ContainerStarted","Data":"02b425f4be5e3b5046eee97cd13c4e33101173b4e44b40272f24e7dbdd9a7dd9"} Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.205672 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-brrt8" event={"ID":"a82c92c0-47dc-4f29-8aa0-304a9f34f728","Type":"ContainerStarted","Data":"8f45ac33445ca44bcfb9a4085594cda6d34f2026737786582a13bd893448940a"} Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.206358 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-brrt8" Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.215676 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fjbc" event={"ID":"33af49a4-4694-4708-9930-d445c917d6a9","Type":"ContainerStarted","Data":"baea8fb3f911ddad75b29580fd2635af7b8e0f876e74123f622f94efba7d34e7"} Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.216118 4866 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-brrt8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/healthz\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.216158 4866 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-brrt8" podUID="a82c92c0-47dc-4f29-8aa0-304a9f34f728" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.15:8080/healthz\": dial tcp 10.217.0.15:8080: connect: connection refused" Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.217516 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vt92h" event={"ID":"e7448d7f-ba89-4749-9cf2-60e55cffd82b","Type":"ContainerStarted","Data":"67dc7568ba5eb02424f87747dc8a182bc115312c58a4b6c01052b8655fa3c267"} Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.217535 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vt92h" event={"ID":"e7448d7f-ba89-4749-9cf2-60e55cffd82b","Type":"ContainerStarted","Data":"1d9d39de81cfa11594c1d1bc8ade8bd8f1f930f50d49204487af3d606defbd0d"} Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.219036 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hp9fq" event={"ID":"a6aebd4d-edf5-4874-8eb1-e1e7504fcd73","Type":"ContainerStarted","Data":"cd12681ffc666b1e11bde654c2f72b459e30992318b271606c7e0ad957c69f3e"} Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.220455 4866 
generic.go:334] "Generic (PLEG): container finished" podID="a3f3a62c-f055-4e72-8b85-47b8d577ca3a" containerID="a914307ef674e69867d6affdd51d62de347c6067eb9e2fd17a7d5336e678fa9c" exitCode=0
Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.220500 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qcsh9" event={"ID":"a3f3a62c-f055-4e72-8b85-47b8d577ca3a","Type":"ContainerDied","Data":"a914307ef674e69867d6affdd51d62de347c6067eb9e2fd17a7d5336e678fa9c"}
Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.221777 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l4vcs" event={"ID":"b601d73e-8fa8-4ece-8141-6a289f058547","Type":"ContainerStarted","Data":"a95e194cf37886131ee01f72e87ac2b34302ae973b2f8e8d599b2918f30decbe"}
Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.228181 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-tlbgv" podStartSLOduration=31.228165565 podStartE2EDuration="31.228165565s" podCreationTimestamp="2025-12-13 22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:37.22540027 +0000 UTC m=+55.266738822" watchObservedRunningTime="2025-12-13 22:17:37.228165565 +0000 UTC m=+55.269504107"
Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.230271 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-72w7j"]
Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.243101 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-ppg68" event={"ID":"6b8290a2-1abc-4343-8aa9-27a3f16f64f7","Type":"ContainerStarted","Data":"20e7acbdba85f7eb9429a79da292908da2b55be83dd7109c2a6f18ed001ddbc8"}
Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.265259 4866 patch_prober.go:28] interesting pod/router-default-5444994796-wbx6k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 13 22:17:37 crc kubenswrapper[4866]: [-]has-synced failed: reason withheld
Dec 13 22:17:37 crc kubenswrapper[4866]: [+]process-running ok
Dec 13 22:17:37 crc kubenswrapper[4866]: healthz check failed
Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.265313 4866 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wbx6k" podUID="f12ed525-d75a-402f-b6c5-ce6298cb98f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.271307 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vnj9d" podStartSLOduration=31.2712894 podStartE2EDuration="31.2712894s" podCreationTimestamp="2025-12-13 22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:37.27002458 +0000 UTC m=+55.311363132" watchObservedRunningTime="2025-12-13 22:17:37.2712894 +0000 UTC m=+55.312627952"
Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.281122 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9nvcf" event={"ID":"87bf1879-724a-4a9e-b4c8-d5de32457782","Type":"ContainerStarted","Data":"8c0ea196bd06af0a0d8377c935fee401b813e6c279ba2f673692a8d15327ec12"} Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.293376 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-wj4kw" event={"ID":"538f1baf-acb2-4a08-9baf-2d710def7477","Type":"ContainerStarted","Data":"6159e286243f18d4144f4deacb904bdd47c2a210f265ab425ee92078ed641648"} Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.300145 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:37 crc kubenswrapper[4866]: E1213 22:17:37.300843 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:37.800828144 +0000 UTC m=+55.842166696 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.307209 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jksbt" event={"ID":"06c68d45-b14f-447c-a680-c68502125d31","Type":"ContainerStarted","Data":"32a10e7c758a18f0ff2cf8b8b1df7d377ee75c15c49240afb6054a69b4bc8669"} Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.351081 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-zp55r" podStartSLOduration=31.351064436 podStartE2EDuration="31.351064436s" podCreationTimestamp="2025-12-13 22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:37.350553284 +0000 UTC m=+55.391891846" watchObservedRunningTime="2025-12-13 22:17:37.351064436 +0000 UTC m=+55.392402988" Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.352623 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ft9zs" podStartSLOduration=31.352613093 podStartE2EDuration="31.352613093s" podCreationTimestamp="2025-12-13 22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:37.307692746 +0000 UTC m=+55.349031298" watchObservedRunningTime="2025-12-13 22:17:37.352613093 +0000 UTC m=+55.393951655" Dec 13 22:17:37 crc kubenswrapper[4866]: W1213 22:17:37.369983 4866 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae751643_7d61_4b5a_9cc6_d3507a958aee.slice/crio-142c28bc62fc6c4bcd9644e312c5787f50d199c60e6e6948489890fe8c7b27b8 WatchSource:0}: Error finding container 142c28bc62fc6c4bcd9644e312c5787f50d199c60e6e6948489890fe8c7b27b8: Status 404 returned error can't find the container with id 142c28bc62fc6c4bcd9644e312c5787f50d199c60e6e6948489890fe8c7b27b8 Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.392550 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l4vcs" podStartSLOduration=32.392513711 podStartE2EDuration="32.392513711s" podCreationTimestamp="2025-12-13 22:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:37.390288109 +0000 UTC m=+55.431626651" watchObservedRunningTime="2025-12-13 22:17:37.392513711 +0000 UTC m=+55.433852263" Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.402913 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:37 crc kubenswrapper[4866]: E1213 22:17:37.403251 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:37.903235763 +0000 UTC m=+55.944574315 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.409882 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-55ztr" event={"ID":"f70cb47b-ccf8-4be2-8e8d-9be364a22205","Type":"ContainerStarted","Data":"8d8279252dcbd58fa566e7e8b1a6caaef10f7f751dcdc62fa4944a4ce272c3b2"} Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.482087 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-brrt8" podStartSLOduration=31.482070698 podStartE2EDuration="31.482070698s" podCreationTimestamp="2025-12-13 22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:37.419836014 +0000 UTC m=+55.461174566" watchObservedRunningTime="2025-12-13 22:17:37.482070698 +0000 UTC m=+55.523409250" Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.483017 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vt92h" podStartSLOduration=31.48301202 podStartE2EDuration="31.48301202s" podCreationTimestamp="2025-12-13 22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:37.482367995 +0000 UTC m=+55.523706547" watchObservedRunningTime="2025-12-13 22:17:37.48301202 +0000 UTC m=+55.524350572" Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.503409 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:37 crc kubenswrapper[4866]: E1213 22:17:37.504528 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:38.004511776 +0000 UTC m=+56.045850338 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.519314 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6svzb" podStartSLOduration=31.519298044 podStartE2EDuration="31.519298044s" podCreationTimestamp="2025-12-13 22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:37.517470101 +0000 UTC m=+55.558808653" watchObservedRunningTime="2025-12-13 22:17:37.519298044 +0000 UTC m=+55.560636596" Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.533550 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v9nm2" event={"ID":"8eda9f75-a124-4cf4-ad0a-358a111fb147","Type":"ContainerStarted","Data":"cdc3c933392ce5f06b46c69d0ef719112da9094ea54f04813b1661fa67052280"} Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.549606 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-55ztr" podStartSLOduration=31.549589786 podStartE2EDuration="31.549589786s" podCreationTimestamp="2025-12-13 22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:37.549478204 +0000 UTC m=+55.590816746" watchObservedRunningTime="2025-12-13 22:17:37.549589786 +0000 UTC m=+55.590928338" Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.577659 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29427735-h22d8" event={"ID":"9055b773-45d4-411e-a3cf-f160e940b102","Type":"ContainerStarted","Data":"9c3b4a2eae11ece8dc0a852479597f798fc4ab8cb8821bb8cfcfdfc583f47d6b"} Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.604839 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:37 crc kubenswrapper[4866]: E1213 22:17:37.605126 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:38.105113512 +0000 UTC m=+56.146452064 (durationBeforeRetry 500ms). 
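The pod_startup_latency_tracker entries scattered through this log are plain timestamp arithmetic: in each one, podStartSLOduration lines up as watchObservedRunningTime minus podCreationTimestamp (for openshift-config-operator above, 22:17:37.519298044 minus 22:17:06 gives the logged 31.519298044s). A short sketch reproducing the computation from the log's own timestamp format:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the kubelet log fields, e.g. "2025-12-13 22:17:06 +0000 UTC".
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-12-13 22:17:06 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2025-12-13 22:17:37.519298044 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 31.519298044s, matching the podStartSLOduration logged above.
	fmt.Println(observed.Sub(created))
}
```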
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.627487 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-c5wt4" event={"ID":"d8283a63-3511-4508-ae32-3e5ec3488c19","Type":"ContainerStarted","Data":"fd278ed20637a945d2e0eef9bddcde5bcd4bf81e0131738f3d3bfc13b9d4a6bd"} Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.628975 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29427735-h22d8" podStartSLOduration=31.628959903 podStartE2EDuration="31.628959903s" podCreationTimestamp="2025-12-13 22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:37.627452728 +0000 UTC m=+55.668791280" watchObservedRunningTime="2025-12-13 22:17:37.628959903 +0000 UTC m=+55.670298455" Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.645255 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-dc7kh" event={"ID":"c65b13fe-f958-4b3e-8c79-c7f4028abaa7","Type":"ContainerStarted","Data":"755c12137263583059e714b797204b0fe01a49345d121944485fb70d831b25ef"} Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.675106 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r85zh" event={"ID":"a5fbc614-c8e5-4ba5-b8c9-30ecec1cefb2","Type":"ContainerStarted","Data":"82f580f8904f1dffc004d7f3d4880fed4b693758fd6025bdd85e830465b881ea"} Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.693404 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vvnn7" event={"ID":"0771c5ba-de65-4932-a37e-b21a2337f265","Type":"ContainerStarted","Data":"b62bef2211f6946753317f82463cd56622be699ab345d4603cfcbf356e490a88"} Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.706661 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:37 crc kubenswrapper[4866]: E1213 22:17:37.707791 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:38.207776577 +0000 UTC m=+56.249115129 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.710038 4866 patch_prober.go:28] interesting pod/downloads-7954f5f757-tlbgv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.710090 4866 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tlbgv" podUID="b5141b6f-cd05-4499-ae0d-cdd14f3f5a61" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.766266 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-b2688" Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.809382 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:37 crc kubenswrapper[4866]: E1213 22:17:37.809647 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:38.309636083 +0000 UTC m=+56.350974635 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:37 crc kubenswrapper[4866]: I1213 22:17:37.912487 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:37 crc kubenswrapper[4866]: E1213 22:17:37.912868 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:38.41281669 +0000 UTC m=+56.454155242 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.016833 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.016892 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.017489 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 22:17:38 crc kubenswrapper[4866]: E1213 22:17:38.017735 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:38.517724498 +0000 UTC m=+56.559063050 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.121117 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:38 crc kubenswrapper[4866]: E1213 22:17:38.121479 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:38.621402797 +0000 UTC m=+56.662741349 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.121524 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.121599 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.121630 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.121712 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:38 crc kubenswrapper[4866]: E1213 22:17:38.122007 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:38.622001171 +0000 UTC m=+56.663339723 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.159348 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.166630 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.175453 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.203302 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.203781 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.205212 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.228503 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:38 crc kubenswrapper[4866]: E1213 22:17:38.228691 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:38.72866511 +0000 UTC m=+56.770003662 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.228740 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86"
Dec 13 22:17:38 crc kubenswrapper[4866]: E1213 22:17:38.229018 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:38.729005158 +0000 UTC m=+56.770343710 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.238007 4866 patch_prober.go:28] interesting pod/router-default-5444994796-wbx6k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 13 22:17:38 crc kubenswrapper[4866]: [-]has-synced failed: reason withheld
Dec 13 22:17:38 crc kubenswrapper[4866]: [+]process-running ok
Dec 13 22:17:38 crc kubenswrapper[4866]: healthz check failed
Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.238063 4866 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wbx6k" podUID="f12ed525-d75a-402f-b6c5-ce6298cb98f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.330899 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 22:17:38 crc kubenswrapper[4866]: E1213 22:17:38.331333 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:38.831315735 +0000 UTC m=+56.872654287 (durationBeforeRetry 500ms). 
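The router's startup probe keeps failing above because its healthz endpoint returns 500 while the backend-http and has-synced checks are unhealthy; kubelet HTTP probes count any status from 200 up to but not including 400 as success. A minimal sketch of that success rule, with an illustrative URL (the router's actual probe target is not shown in this log):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeHTTP applies the kubelet's HTTP probe success rule:
// a status in [200, 400) passes; anything else, or a transport error, fails.
func probeHTTP(url string) bool {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode >= 200 && resp.StatusCode < 400
}

func main() {
	// Illustrative endpoint only; a 500 response, as in the log, returns false here.
	fmt.Println(probeHTTP("http://127.0.0.1:1936/healthz/ready"))
}
```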
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.432878 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:38 crc kubenswrapper[4866]: E1213 22:17:38.433224 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:38.933208492 +0000 UTC m=+56.974547044 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.534661 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:38 crc kubenswrapper[4866]: E1213 22:17:38.535019 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:39.035001386 +0000 UTC m=+57.076339938 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.636235 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:38 crc kubenswrapper[4866]: E1213 22:17:38.636502 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:39.136490794 +0000 UTC m=+57.177829346 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.739517 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:38 crc kubenswrapper[4866]: E1213 22:17:38.740167 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:39.240152192 +0000 UTC m=+57.281490744 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.754175 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jksbt" event={"ID":"06c68d45-b14f-447c-a680-c68502125d31","Type":"ContainerStarted","Data":"efbbd14f779b059a87fbee180fc0c57181cb50ae117bb1d9c9ecb8c842fc0577"} Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.790870 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8p9hb" event={"ID":"ff0c44c6-31b2-46c3-a5b7-51c10ed59236","Type":"ContainerStarted","Data":"0c65fb503c63774d0bd862a0a02c0a7c7add7b09b5721f548cfab155773cd6db"} Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.792517 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8p9hb" Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.793319 4866 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-8p9hb container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:5443/healthz\": dial tcp 10.217.0.24:5443: connect: connection refused" start-of-body= Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.793357 4866 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8p9hb" podUID="ff0c44c6-31b2-46c3-a5b7-51c10ed59236" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.24:5443/healthz\": dial tcp 10.217.0.24:5443: connect: connection refused" Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.840686 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:38 crc kubenswrapper[4866]: E1213 22:17:38.842002 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:39.341991098 +0000 UTC m=+57.383329650 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.845040 4866 generic.go:334] "Generic (PLEG): container finished" podID="538f1baf-acb2-4a08-9baf-2d710def7477" containerID="a08821b0f35a352a0b5717691fce20a6bba169a17d6add8cb256dfd9c0494c02" exitCode=0 Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.845108 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-wj4kw" event={"ID":"538f1baf-acb2-4a08-9baf-2d710def7477","Type":"ContainerDied","Data":"a08821b0f35a352a0b5717691fce20a6bba169a17d6add8cb256dfd9c0494c02"} Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.851865 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hp9fq" event={"ID":"a6aebd4d-edf5-4874-8eb1-e1e7504fcd73","Type":"ContainerStarted","Data":"55461230e413442e64231ca62ad98e8655192872a5862039959d00e79ff1a721"} Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.869292 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-dc7kh" event={"ID":"c65b13fe-f958-4b3e-8c79-c7f4028abaa7","Type":"ContainerStarted","Data":"3ae6af836480a26c4c9c1becc141c1148065f327732501a2eb6da5d643b12698"} Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.889940 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vvnn7" event={"ID":"0771c5ba-de65-4932-a37e-b21a2337f265","Type":"ContainerStarted","Data":"ac4d3d55f35beb46c2b228acca5a2a9d6aaf035a240bb93505aafe9b3bb31ffc"} Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.897344 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8p9hb" podStartSLOduration=32.89732861 podStartE2EDuration="32.89732861s" podCreationTimestamp="2025-12-13 22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:38.859370447 +0000 UTC m=+56.900708999" watchObservedRunningTime="2025-12-13 22:17:38.89732861 +0000 UTC m=+56.938667162" Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.897888 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hp9fq" podStartSLOduration=32.897883363 podStartE2EDuration="32.897883363s" podCreationTimestamp="2025-12-13 22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:38.895268861 +0000 UTC m=+56.936607413" watchObservedRunningTime="2025-12-13 22:17:38.897883363 +0000 UTC m=+56.939221915" Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.916755 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-5247b" 
event={"ID":"e110a18a-ce3e-4c58-b80d-9f8c9018a868","Type":"ContainerStarted","Data":"87af5fe5938fe33632949fd6b3de7cb1150424688681f2762f5ebe78e7816b14"} Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.927942 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcftv" event={"ID":"345bf766-2671-46d4-82f0-cb0838949d52","Type":"ContainerStarted","Data":"5433c3305c141ae2b5223ff9df75abb5c1a541442c12a904974916ae5db88d69"} Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.927994 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcftv" event={"ID":"345bf766-2671-46d4-82f0-cb0838949d52","Type":"ContainerStarted","Data":"242bd02d5ac76fe6c4affcc3dfe90c68e77ccf4c5245f36447a9c6a127517ac1"} Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.929065 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcftv" Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.931387 4866 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-qcftv container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.931433 4866 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcftv" podUID="345bf766-2671-46d4-82f0-cb0838949d52" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.945616 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:38 crc kubenswrapper[4866]: E1213 22:17:38.947299 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:39.447275375 +0000 UTC m=+57.488613977 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.951120 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-sdd5b" event={"ID":"1d7692f7-4101-4c41-86f0-d8c2883110bf","Type":"ContainerStarted","Data":"334bce446e2c7ebe511e55c9345a40b1a89b78e0cbd0580dd3280531635a550f"} Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.952569 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fjbc" event={"ID":"33af49a4-4694-4708-9930-d445c917d6a9","Type":"ContainerStarted","Data":"d024e8199ad493953e72582c40d2c54ff9bdb6a482705f702182b4dea3811ec1"} Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.953266 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fjbc" Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.954535 4866 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-7fjbc container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.954576 4866 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fjbc" podUID="33af49a4-4694-4708-9930-d445c917d6a9" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.955540 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-57fcq" event={"ID":"107bca77-52a4-442c-ae09-ee527f46a9c5","Type":"ContainerStarted","Data":"b33d25566015170bb634c5e0f889ae8bbbbaa96a232a6cbc07f74e7a8554b1b7"} Dec 13 22:17:38 crc kubenswrapper[4866]: I1213 22:17:38.962773 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-72w7j" event={"ID":"474a926a-06bd-4d6e-bd60-5cfd5a1fa37b","Type":"ContainerStarted","Data":"6a312649d5423149e9764cea028bf98966f85779f79d62b11f7de05b2a12de6f"} Dec 13 22:17:39 crc kubenswrapper[4866]: I1213 22:17:39.004868 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v9nm2" event={"ID":"8eda9f75-a124-4cf4-ad0a-358a111fb147","Type":"ContainerStarted","Data":"d6f6885efcec1fc5fdb08e827aaedb85ecd5cb0c88ab87a2c1b08cf5675d7ba8"} Dec 13 22:17:39 crc kubenswrapper[4866]: I1213 22:17:39.011475 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ps9gj" event={"ID":"35c963c4-def0-4384-a3bc-856b4cb6ed27","Type":"ContainerStarted","Data":"32f93af297e6d93f086aa2bcd99d73746dda1553511f98b899c90ca81c532e29"} Dec 13 22:17:39 crc kubenswrapper[4866]: I1213 22:17:39.013103 4866 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-dmd7t" event={"ID":"ae751643-7d61-4b5a-9cc6-d3507a958aee","Type":"ContainerStarted","Data":"142c28bc62fc6c4bcd9644e312c5787f50d199c60e6e6948489890fe8c7b27b8"} Dec 13 22:17:39 crc kubenswrapper[4866]: I1213 22:17:39.014272 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-g9k2k" event={"ID":"92fa87fc-9ff3-4add-b542-4a936c6d8d7a","Type":"ContainerStarted","Data":"ea03b6bd5449e529bc7f760bf9cb8080dc9517cb8949ae4f78296feffbe99a41"} Dec 13 22:17:39 crc kubenswrapper[4866]: I1213 22:17:39.015520 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h29td" event={"ID":"37655499-3f2b-496a-a47c-cbf31b9c19ef","Type":"ContainerStarted","Data":"c60795ee5486ce17cb06567db39b35bee6334809da6f86b7db4ff9288aa46f33"} Dec 13 22:17:39 crc kubenswrapper[4866]: I1213 22:17:39.017331 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9nvcf" event={"ID":"87bf1879-724a-4a9e-b4c8-d5de32457782","Type":"ContainerStarted","Data":"17a8f9dfce51737c2180f2eee2974082fb55bc6ec64d168a5b34e8b7e4e6f921"} Dec 13 22:17:39 crc kubenswrapper[4866]: I1213 22:17:39.018436 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-kg5zb" event={"ID":"eb8f2404-0acf-4a0a-a581-b5c767351742","Type":"ContainerStarted","Data":"9c9aaf4f8190bdeeec308225ac1adf791809e7a295803665e1107ae7dd0e5f40"} Dec 13 22:17:39 crc kubenswrapper[4866]: I1213 22:17:39.018891 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-kg5zb" Dec 13 22:17:39 crc kubenswrapper[4866]: I1213 22:17:39.047201 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:39 crc kubenswrapper[4866]: E1213 22:17:39.048863 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:39.548851554 +0000 UTC m=+57.590190106 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:39 crc kubenswrapper[4866]: I1213 22:17:39.062196 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fw6xl" event={"ID":"f607d834-3c1c-45e8-8208-19ed5aca7e95","Type":"ContainerStarted","Data":"2721174367dfe9602fc1de2a7fe38b6c564a05389ccc31a5ee3e32b0605aca01"} Dec 13 22:17:39 crc kubenswrapper[4866]: I1213 22:17:39.066234 4866 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-brrt8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/healthz\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Dec 13 22:17:39 crc kubenswrapper[4866]: I1213 22:17:39.066272 4866 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-brrt8" podUID="a82c92c0-47dc-4f29-8aa0-304a9f34f728" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.15:8080/healthz\": dial tcp 10.217.0.15:8080: connect: connection refused" Dec 13 22:17:39 crc kubenswrapper[4866]: I1213 22:17:39.067155 4866 patch_prober.go:28] interesting pod/downloads-7954f5f757-tlbgv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Dec 13 22:17:39 crc kubenswrapper[4866]: I1213 22:17:39.067172 4866 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tlbgv" podUID="b5141b6f-cd05-4499-ae0d-cdd14f3f5a61" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Dec 13 22:17:39 crc kubenswrapper[4866]: I1213 22:17:39.148275 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:39 crc kubenswrapper[4866]: E1213 22:17:39.149794 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:39.649780638 +0000 UTC m=+57.691119190 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:39 crc kubenswrapper[4866]: I1213 22:17:39.203977 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-kg5zb" Dec 13 22:17:39 crc kubenswrapper[4866]: I1213 22:17:39.210635 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-dc7kh" podStartSLOduration=8.21061846 podStartE2EDuration="8.21061846s" podCreationTimestamp="2025-12-13 22:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:39.079397953 +0000 UTC m=+57.120736505" watchObservedRunningTime="2025-12-13 22:17:39.21061846 +0000 UTC m=+57.251957012" Dec 13 22:17:39 crc kubenswrapper[4866]: I1213 22:17:39.212829 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-kg5zb" podStartSLOduration=8.212823171 podStartE2EDuration="8.212823171s" podCreationTimestamp="2025-12-13 22:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:39.21104312 +0000 UTC m=+57.252381672" watchObservedRunningTime="2025-12-13 22:17:39.212823171 +0000 UTC m=+57.254161723" Dec 13 22:17:39 crc kubenswrapper[4866]: I1213 22:17:39.248312 4866 patch_prober.go:28] interesting pod/router-default-5444994796-wbx6k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 22:17:39 crc kubenswrapper[4866]: [-]has-synced failed: reason withheld Dec 13 22:17:39 crc kubenswrapper[4866]: [+]process-running ok Dec 13 22:17:39 crc kubenswrapper[4866]: healthz check failed Dec 13 22:17:39 crc kubenswrapper[4866]: I1213 22:17:39.248607 4866 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wbx6k" podUID="f12ed525-d75a-402f-b6c5-ce6298cb98f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 22:17:39 crc kubenswrapper[4866]: I1213 22:17:39.249791 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:39 crc kubenswrapper[4866]: E1213 22:17:39.250164 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:39.75015302 +0000 UTC m=+57.791491572 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:39 crc kubenswrapper[4866]: I1213 22:17:39.352511 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:39 crc kubenswrapper[4866]: E1213 22:17:39.352699 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:39.852686062 +0000 UTC m=+57.894024614 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:39 crc kubenswrapper[4866]: I1213 22:17:39.352767 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:39 crc kubenswrapper[4866]: E1213 22:17:39.353066 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:39.85304366 +0000 UTC m=+57.894382212 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:39 crc kubenswrapper[4866]: I1213 22:17:39.435203 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6svzb" Dec 13 22:17:39 crc kubenswrapper[4866]: I1213 22:17:39.458269 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:39 crc kubenswrapper[4866]: E1213 22:17:39.458673 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:39.958656515 +0000 UTC m=+57.999995067 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:39 crc kubenswrapper[4866]: I1213 22:17:39.502438 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ps9gj" podStartSLOduration=33.502420044 podStartE2EDuration="33.502420044s" podCreationTimestamp="2025-12-13 22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:39.369787764 +0000 UTC m=+57.411126316" watchObservedRunningTime="2025-12-13 22:17:39.502420044 +0000 UTC m=+57.543758596" Dec 13 22:17:39 crc kubenswrapper[4866]: I1213 22:17:39.566762 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:39 crc kubenswrapper[4866]: E1213 22:17:39.567145 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:40.067129836 +0000 UTC m=+58.108468388 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:39 crc kubenswrapper[4866]: I1213 22:17:39.604426 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fjbc" podStartSLOduration=33.604405433 podStartE2EDuration="33.604405433s" podCreationTimestamp="2025-12-13 22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:39.504311608 +0000 UTC m=+57.545650150" watchObservedRunningTime="2025-12-13 22:17:39.604405433 +0000 UTC m=+57.645743985" Dec 13 22:17:39 crc kubenswrapper[4866]: I1213 22:17:39.673875 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:39 crc kubenswrapper[4866]: E1213 22:17:39.674261 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:40.174245546 +0000 UTC m=+58.215584098 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:39 crc kubenswrapper[4866]: I1213 22:17:39.679029 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcftv" podStartSLOduration=33.679016518 podStartE2EDuration="33.679016518s" podCreationTimestamp="2025-12-13 22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:39.678023335 +0000 UTC m=+57.719361877" watchObservedRunningTime="2025-12-13 22:17:39.679016518 +0000 UTC m=+57.720355070" Dec 13 22:17:39 crc kubenswrapper[4866]: I1213 22:17:39.775471 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:39 crc kubenswrapper[4866]: E1213 22:17:39.775791 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:40.275777495 +0000 UTC m=+58.317116047 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:39 crc kubenswrapper[4866]: I1213 22:17:39.811908 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v9nm2" podStartSLOduration=33.811894654 podStartE2EDuration="33.811894654s" podCreationTimestamp="2025-12-13 22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:39.810226085 +0000 UTC m=+57.851564637" watchObservedRunningTime="2025-12-13 22:17:39.811894654 +0000 UTC m=+57.853233206" Dec 13 22:17:39 crc kubenswrapper[4866]: I1213 22:17:39.812836 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-ppg68" podStartSLOduration=33.812832026 podStartE2EDuration="33.812832026s" podCreationTimestamp="2025-12-13 22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:39.762287677 +0000 UTC m=+57.803626229" watchObservedRunningTime="2025-12-13 22:17:39.812832026 +0000 UTC m=+57.854170568" Dec 13 22:17:39 crc kubenswrapper[4866]: I1213 22:17:39.876519 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:39 crc kubenswrapper[4866]: E1213 22:17:39.876693 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:40.376667618 +0000 UTC m=+58.418006170 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:39 crc kubenswrapper[4866]: I1213 22:17:39.876785 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:39 crc kubenswrapper[4866]: E1213 22:17:39.877206 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:40.37719265 +0000 UTC m=+58.418531202 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:39 crc kubenswrapper[4866]: I1213 22:17:39.977672 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:39 crc kubenswrapper[4866]: E1213 22:17:39.977872 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:40.477848938 +0000 UTC m=+58.519187490 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:39 crc kubenswrapper[4866]: I1213 22:17:39.978208 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:39 crc kubenswrapper[4866]: E1213 22:17:39.978488 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:40.478471563 +0000 UTC m=+58.519810115 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:39 crc kubenswrapper[4866]: I1213 22:17:39.992554 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-57fcq" podStartSLOduration=8.992539154 podStartE2EDuration="8.992539154s" podCreationTimestamp="2025-12-13 22:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:39.914333434 +0000 UTC m=+57.955671986" watchObservedRunningTime="2025-12-13 22:17:39.992539154 +0000 UTC m=+58.033877706" Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.078842 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:40 crc kubenswrapper[4866]: E1213 22:17:40.079118 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:40.57910313 +0000 UTC m=+58.620441682 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.137209 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jksbt" event={"ID":"06c68d45-b14f-447c-a680-c68502125d31","Type":"ContainerStarted","Data":"929e847b2be3ddd0299721f76a3c343e9eb0d24ef54cde3c024921a8953f265d"} Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.182421 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:40 crc kubenswrapper[4866]: E1213 22:17:40.183315 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:40.683300631 +0000 UTC m=+58.724639183 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.194228 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h29td" event={"ID":"37655499-3f2b-496a-a47c-cbf31b9c19ef","Type":"ContainerStarted","Data":"94b087efdf74bd9b6285be17a69b6ee6020df15d72ecdd70a47acb4b8c1307c8"} Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.212285 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jksbt" podStartSLOduration=34.212270043 podStartE2EDuration="34.212270043s" podCreationTimestamp="2025-12-13 22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:40.211483984 +0000 UTC m=+58.252822536" watchObservedRunningTime="2025-12-13 22:17:40.212270043 +0000 UTC m=+58.253608595" Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.229538 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-c5wt4" event={"ID":"d8283a63-3511-4508-ae32-3e5ec3488c19","Type":"ContainerStarted","Data":"aca32df6063badf96da4bcd0621fd84555c0e024a7daaa3f0d2e572b8de816fc"} Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.244414 4866 patch_prober.go:28] interesting pod/router-default-5444994796-wbx6k container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 22:17:40 crc kubenswrapper[4866]: [-]has-synced failed: reason withheld Dec 13 22:17:40 crc kubenswrapper[4866]: [+]process-running ok Dec 13 22:17:40 crc kubenswrapper[4866]: healthz check failed Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.244461 4866 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wbx6k" podUID="f12ed525-d75a-402f-b6c5-ce6298cb98f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.244586 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fw6xl" event={"ID":"f607d834-3c1c-45e8-8208-19ed5aca7e95","Type":"ContainerStarted","Data":"ed01822c5b7de7cacdbae1a3bf3e0ce47a8711e91f54eb9c0b4201b7d77d32c3"} Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.262778 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h29td" podStartSLOduration=34.262763281 podStartE2EDuration="34.262763281s" podCreationTimestamp="2025-12-13 22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:40.262249168 +0000 UTC m=+58.303587720" watchObservedRunningTime="2025-12-13 22:17:40.262763281 +0000 UTC m=+58.304101823" Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.275462 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-c64bv" event={"ID":"c5a9f2a1-3ea0-4386-b3d8-483570ff066d","Type":"ContainerStarted","Data":"ea1e0c231eea8892684c9a87e8967ed3599a42d9a8d16bf24c73bd2712f81d28"} Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.282670 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-ppg68" event={"ID":"6b8290a2-1abc-4343-8aa9-27a3f16f64f7","Type":"ContainerStarted","Data":"fd9e23fb1b1c19e639730323fad75b552dbfa1aafee1a4e34796efd48cc230c9"} Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.285523 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r85zh" event={"ID":"a5fbc614-c8e5-4ba5-b8c9-30ecec1cefb2","Type":"ContainerStarted","Data":"5d0c6a6d8b719763b61b98ac619fff84f87b396c06202adfce25066e90d43957"} Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.285550 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r85zh" event={"ID":"a5fbc614-c8e5-4ba5-b8c9-30ecec1cefb2","Type":"ContainerStarted","Data":"7e3bf31695adc44fe6c14311fb14a8c2e9f9dc248f03d346c695e327889fd46a"} Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.286503 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ps9gj" event={"ID":"35c963c4-def0-4384-a3bc-856b4cb6ed27","Type":"ContainerStarted","Data":"8ca066ff0d1056bd283740273d067efa48430d6772cb3edfee272066e1bbdaab"} Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.288886 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-g9k2k" 
event={"ID":"92fa87fc-9ff3-4add-b542-4a936c6d8d7a","Type":"ContainerStarted","Data":"56b05f4fc1fb70641a6e1dd56881ccaf0d59a28ac8a69beb5aaf10f232586bdc"} Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.291890 4866 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-7fjbc container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.291929 4866 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fjbc" podUID="33af49a4-4694-4708-9930-d445c917d6a9" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.294423 4866 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-brrt8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/healthz\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.294450 4866 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-brrt8" podUID="a82c92c0-47dc-4f29-8aa0-304a9f34f728" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.15:8080/healthz\": dial tcp 10.217.0.15:8080: connect: connection refused" Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.294733 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:40 crc kubenswrapper[4866]: E1213 22:17:40.295841 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:40.795820338 +0000 UTC m=+58.837158890 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.301648 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:40 crc kubenswrapper[4866]: E1213 22:17:40.302033 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:40.802018984 +0000 UTC m=+58.843357536 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.335675 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-c64bv" podStartSLOduration=34.335659495 podStartE2EDuration="34.335659495s" podCreationTimestamp="2025-12-13 22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:40.334758484 +0000 UTC m=+58.376097036" watchObservedRunningTime="2025-12-13 22:17:40.335659495 +0000 UTC m=+58.376998047" Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.346339 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcftv" Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.402271 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:40 crc kubenswrapper[4866]: E1213 22:17:40.404144 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:40.904124256 +0000 UTC m=+58.945462808 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.451396 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-g9k2k" podStartSLOduration=34.451361077 podStartE2EDuration="34.451361077s" podCreationTimestamp="2025-12-13 22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:40.378217327 +0000 UTC m=+58.419555889" watchObservedRunningTime="2025-12-13 22:17:40.451361077 +0000 UTC m=+58.492699629" Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.504632 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:40 crc kubenswrapper[4866]: E1213 22:17:40.505284 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:41.005272105 +0000 UTC m=+59.046610647 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.607700 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:40 crc kubenswrapper[4866]: E1213 22:17:40.608071 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:41.108038623 +0000 UTC m=+59.149377175 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.708708 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:40 crc kubenswrapper[4866]: E1213 22:17:40.708998 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:41.208987748 +0000 UTC m=+59.250326300 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.775694 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-kg5zb"] Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.789799 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rw67t"] Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.790714 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rw67t" Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.792903 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.818800 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:40 crc kubenswrapper[4866]: E1213 22:17:40.819085 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:41.319040196 +0000 UTC m=+59.360378748 (durationBeforeRetry 500ms). 
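The two failures above repeat in lock-step through the rest of this window: every UnmountVolume.TearDown for pod 8f668bae-612b-4b75-9490-919e737c6a3b and every MountVolume.MountDevice for image-registry-697d97f7c8-wzf86 fails because the CSI driver kubevirt.io.hostpath-provisioner has not yet registered with the kubelet (its csi-hostpathplugin pod is still coming up, as the later ContainerStarted event shows), and each failure re-arms a fixed 500ms backoff ("No retries permitted until ... durationBeforeRetry 500ms"). A minimal triage sketch for pulling the operation, volume, and retry deadline out of these lines; it assumes the journal has been reflowed to one entry per line, and the tool itself is hypothetical, not part of the kubelet:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches kubelet "nestedpendingoperations" backoff lines like the ones above
// and captures the retry deadline, the failed operation, and the volume name.
var backoffRe = regexp.MustCompile(
	`No retries permitted until ([0-9-]+ [0-9:.]+) .*Error: (\w+\.\w+) failed for volume "([^"]+)"`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal entries can be very long
	for sc.Scan() {
		if m := backoffRe.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("retry=%s op=%s volume=%s\n", m[1], m[2], m[3])
		}
	}
}

Fed the reflowed journal on stdin, it prints one line per failed operation, which makes the fixed 500ms cadence of the retry loop easy to see.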
Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.775694 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-kg5zb"]
Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.789799 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rw67t"]
Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.790714 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rw67t"
Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.792903 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.818800 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 22:17:40 crc kubenswrapper[4866]: E1213 22:17:40.819085 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:41.319040196 +0000 UTC m=+59.360378748 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.819341 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86"
Dec 13 22:17:40 crc kubenswrapper[4866]: E1213 22:17:40.819650 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:41.319641141 +0000 UTC m=+59.360979693 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.820928 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rw67t"]
Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.924339 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.924504 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d019a2fd-1864-4c5b-8deb-62c898466850-utilities\") pod \"community-operators-rw67t\" (UID: \"d019a2fd-1864-4c5b-8deb-62c898466850\") " pod="openshift-marketplace/community-operators-rw67t"
Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.924533 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d019a2fd-1864-4c5b-8deb-62c898466850-catalog-content\") pod \"community-operators-rw67t\" (UID: \"d019a2fd-1864-4c5b-8deb-62c898466850\") " pod="openshift-marketplace/community-operators-rw67t"
Dec 13 22:17:40 crc kubenswrapper[4866]: I1213 22:17:40.924565 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2wcb\" (UniqueName: \"kubernetes.io/projected/d019a2fd-1864-4c5b-8deb-62c898466850-kube-api-access-h2wcb\") pod \"community-operators-rw67t\" (UID: \"d019a2fd-1864-4c5b-8deb-62c898466850\") " pod="openshift-marketplace/community-operators-rw67t"
Dec 13 22:17:40 crc kubenswrapper[4866]: E1213 22:17:40.924714 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:41.424694302 +0000 UTC m=+59.466032854 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:40 crc kubenswrapper[4866]: W1213 22:17:40.977157 4866 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-147699c66d3c1dc725305ebbef02e7e6f05e63f8223d532edf448b9cf618cb4f WatchSource:0}: Error finding container 147699c66d3c1dc725305ebbef02e7e6f05e63f8223d532edf448b9cf618cb4f: Status 404 returned error can't find the container with id 147699c66d3c1dc725305ebbef02e7e6f05e63f8223d532edf448b9cf618cb4f
Dec 13 22:17:41 crc kubenswrapper[4866]: I1213 22:17:41.026794 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d019a2fd-1864-4c5b-8deb-62c898466850-utilities\") pod \"community-operators-rw67t\" (UID: \"d019a2fd-1864-4c5b-8deb-62c898466850\") " pod="openshift-marketplace/community-operators-rw67t"
Dec 13 22:17:41 crc kubenswrapper[4866]: I1213 22:17:41.026836 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d019a2fd-1864-4c5b-8deb-62c898466850-catalog-content\") pod \"community-operators-rw67t\" (UID: \"d019a2fd-1864-4c5b-8deb-62c898466850\") " pod="openshift-marketplace/community-operators-rw67t"
Dec 13 22:17:41 crc kubenswrapper[4866]: I1213 22:17:41.026871 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2wcb\" (UniqueName: \"kubernetes.io/projected/d019a2fd-1864-4c5b-8deb-62c898466850-kube-api-access-h2wcb\") pod \"community-operators-rw67t\" (UID: \"d019a2fd-1864-4c5b-8deb-62c898466850\") " pod="openshift-marketplace/community-operators-rw67t"
Dec 13 22:17:41 crc kubenswrapper[4866]: I1213 22:17:41.026918 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86"
Dec 13 22:17:41 crc kubenswrapper[4866]: E1213 22:17:41.027181 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:41.527170943 +0000 UTC m=+59.568509495 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:41 crc kubenswrapper[4866]: I1213 22:17:41.027473 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d019a2fd-1864-4c5b-8deb-62c898466850-catalog-content\") pod \"community-operators-rw67t\" (UID: \"d019a2fd-1864-4c5b-8deb-62c898466850\") " pod="openshift-marketplace/community-operators-rw67t"
Dec 13 22:17:41 crc kubenswrapper[4866]: I1213 22:17:41.027692 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d019a2fd-1864-4c5b-8deb-62c898466850-utilities\") pod \"community-operators-rw67t\" (UID: \"d019a2fd-1864-4c5b-8deb-62c898466850\") " pod="openshift-marketplace/community-operators-rw67t"
Dec 13 22:17:41 crc kubenswrapper[4866]: I1213 22:17:41.077569 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2wcb\" (UniqueName: \"kubernetes.io/projected/d019a2fd-1864-4c5b-8deb-62c898466850-kube-api-access-h2wcb\") pod \"community-operators-rw67t\" (UID: \"d019a2fd-1864-4c5b-8deb-62c898466850\") " pod="openshift-marketplace/community-operators-rw67t"
Dec 13 22:17:41 crc kubenswrapper[4866]: I1213 22:17:41.128036 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 22:17:41 crc kubenswrapper[4866]: E1213 22:17:41.128187 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:41.628162118 +0000 UTC m=+59.669500670 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:41 crc kubenswrapper[4866]: I1213 22:17:41.128272 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86"
Dec 13 22:17:41 crc kubenswrapper[4866]: E1213 22:17:41.128552 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:41.628545017 +0000 UTC m=+59.669883569 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:41 crc kubenswrapper[4866]: I1213 22:17:41.138032 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rw67t"
Dec 13 22:17:41 crc kubenswrapper[4866]: I1213 22:17:41.195629 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ppd79"]
Dec 13 22:17:41 crc kubenswrapper[4866]: I1213 22:17:41.196487 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ppd79"
Dec 13 22:17:41 crc kubenswrapper[4866]: I1213 22:17:41.227370 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ppd79"]
Dec 13 22:17:41 crc kubenswrapper[4866]: I1213 22:17:41.229464 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 22:17:41 crc kubenswrapper[4866]: I1213 22:17:41.229625 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d76fbae-d65b-45df-aee5-0924d2ec35e8-catalog-content\") pod \"community-operators-ppd79\" (UID: \"9d76fbae-d65b-45df-aee5-0924d2ec35e8\") " pod="openshift-marketplace/community-operators-ppd79"
Dec 13 22:17:41 crc kubenswrapper[4866]: I1213 22:17:41.229658 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m888h\" (UniqueName: \"kubernetes.io/projected/9d76fbae-d65b-45df-aee5-0924d2ec35e8-kube-api-access-m888h\") pod \"community-operators-ppd79\" (UID: \"9d76fbae-d65b-45df-aee5-0924d2ec35e8\") " pod="openshift-marketplace/community-operators-ppd79"
Dec 13 22:17:41 crc kubenswrapper[4866]: E1213 22:17:41.229702 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:41.729669146 +0000 UTC m=+59.771007698 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:41 crc kubenswrapper[4866]: I1213 22:17:41.229749 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d76fbae-d65b-45df-aee5-0924d2ec35e8-utilities\") pod \"community-operators-ppd79\" (UID: \"9d76fbae-d65b-45df-aee5-0924d2ec35e8\") " pod="openshift-marketplace/community-operators-ppd79"
Dec 13 22:17:41 crc kubenswrapper[4866]: I1213 22:17:41.235676 4866 patch_prober.go:28] interesting pod/router-default-5444994796-wbx6k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 13 22:17:41 crc kubenswrapper[4866]: [-]has-synced failed: reason withheld
Dec 13 22:17:41 crc kubenswrapper[4866]: [+]process-running ok
Dec 13 22:17:41 crc kubenswrapper[4866]: healthz check failed
Dec 13 22:17:41 crc kubenswrapper[4866]: I1213 22:17:41.235736 4866 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wbx6k" podUID="f12ed525-d75a-402f-b6c5-ce6298cb98f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
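The router probe body above is the standard Kubernetes healthz listing: one [+]/[-] line per sub-check followed by an overall verdict, and journald records each line of the multi-line message as its own entry with a repeated timestamp header. A small illustrative sketch that reproduces the same body shape from a set of check results; the check names are taken from the log, everything else here is assumed for the example (the endpoint itself appears to withhold failure details from unprivileged probers, hence "reason withheld"):

package main

import (
	"fmt"
	"strings"
)

// healthzBody renders check results in the aggregated healthz style
// seen in the router probe output above.
func healthzBody(checks map[string]bool) string {
	var b strings.Builder
	failed := false
	for _, name := range []string{"backend-http", "has-synced", "process-running"} {
		if checks[name] {
			fmt.Fprintf(&b, "[+]%s ok\n", name)
		} else {
			fmt.Fprintf(&b, "[-]%s failed: reason withheld\n", name)
			failed = true
		}
	}
	if failed {
		b.WriteString("healthz check failed\n")
	} else {
		b.WriteString("healthz check passed\n")
	}
	return b.String()
}

func main() {
	// Matches the state logged above: only process-running is passing.
	fmt.Print(healthzBody(map[string]bool{"process-running": true}))
}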
Dec 13 22:17:41 crc kubenswrapper[4866]: I1213 22:17:41.289159 4866 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-8p9hb container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 13 22:17:41 crc kubenswrapper[4866]: I1213 22:17:41.289211 4866 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8p9hb" podUID="ff0c44c6-31b2-46c3-a5b7-51c10ed59236" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.24:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 13 22:17:41 crc kubenswrapper[4866]: I1213 22:17:41.297198 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"147699c66d3c1dc725305ebbef02e7e6f05e63f8223d532edf448b9cf618cb4f"}
Dec 13 22:17:41 crc kubenswrapper[4866]: I1213 22:17:41.303353 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-dmd7t" event={"ID":"ae751643-7d61-4b5a-9cc6-d3507a958aee","Type":"ContainerStarted","Data":"cfe9569f37d745eadf8d3de4681139e770920480bb5f99c109b437f90b1cff25"}
Dec 13 22:17:41 crc kubenswrapper[4866]: I1213 22:17:41.308464 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"815f56cec608b6c5a5e326790afeec9de7797f4fcef4de856cb8463c176712b2"}
Dec 13 22:17:41 crc kubenswrapper[4866]: I1213 22:17:41.310520 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"81806035bd43383b91e39f555f86d1e4c2522f1a3333d694eca364c1537b8801"}
Dec 13 22:17:41 crc kubenswrapper[4866]: I1213 22:17:41.331401 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d76fbae-d65b-45df-aee5-0924d2ec35e8-utilities\") pod \"community-operators-ppd79\" (UID: \"9d76fbae-d65b-45df-aee5-0924d2ec35e8\") " pod="openshift-marketplace/community-operators-ppd79"
Dec 13 22:17:41 crc kubenswrapper[4866]: I1213 22:17:41.331459 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86"
Dec 13 22:17:41 crc kubenswrapper[4866]: I1213 22:17:41.331481 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d76fbae-d65b-45df-aee5-0924d2ec35e8-catalog-content\") pod \"community-operators-ppd79\" (UID: \"9d76fbae-d65b-45df-aee5-0924d2ec35e8\") " pod="openshift-marketplace/community-operators-ppd79"
Dec 13 22:17:41 crc kubenswrapper[4866]: I1213 22:17:41.331516 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m888h\" (UniqueName: \"kubernetes.io/projected/9d76fbae-d65b-45df-aee5-0924d2ec35e8-kube-api-access-m888h\") pod \"community-operators-ppd79\" (UID: \"9d76fbae-d65b-45df-aee5-0924d2ec35e8\") " pod="openshift-marketplace/community-operators-ppd79"
Dec 13 22:17:41 crc kubenswrapper[4866]: E1213 22:17:41.331868 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:41.83184591 +0000 UTC m=+59.873184462 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:41 crc kubenswrapper[4866]: I1213 22:17:41.332201 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d76fbae-d65b-45df-aee5-0924d2ec35e8-utilities\") pod \"community-operators-ppd79\" (UID: \"9d76fbae-d65b-45df-aee5-0924d2ec35e8\") " pod="openshift-marketplace/community-operators-ppd79"
Dec 13 22:17:41 crc kubenswrapper[4866]: I1213 22:17:41.332388 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d76fbae-d65b-45df-aee5-0924d2ec35e8-catalog-content\") pod \"community-operators-ppd79\" (UID: \"9d76fbae-d65b-45df-aee5-0924d2ec35e8\") " pod="openshift-marketplace/community-operators-ppd79"
Dec 13 22:17:41 crc kubenswrapper[4866]: I1213 22:17:41.430181 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m888h\" (UniqueName: \"kubernetes.io/projected/9d76fbae-d65b-45df-aee5-0924d2ec35e8-kube-api-access-m888h\") pod \"community-operators-ppd79\" (UID: \"9d76fbae-d65b-45df-aee5-0924d2ec35e8\") " pod="openshift-marketplace/community-operators-ppd79"
Dec 13 22:17:41 crc kubenswrapper[4866]: I1213 22:17:41.433870 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 22:17:41 crc kubenswrapper[4866]: E1213 22:17:41.434765 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:41.934737779 +0000 UTC m=+59.976076341 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:41 crc kubenswrapper[4866]: I1213 22:17:41.465347 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fjbc"
Dec 13 22:17:41 crc kubenswrapper[4866]: I1213 22:17:41.512710 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ppd79"
Dec 13 22:17:41 crc kubenswrapper[4866]: I1213 22:17:41.535772 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86"
Dec 13 22:17:41 crc kubenswrapper[4866]: E1213 22:17:41.536228 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:42.036211536 +0000 UTC m=+60.077550088 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:41 crc kubenswrapper[4866]: I1213 22:17:41.637635 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 22:17:41 crc kubenswrapper[4866]: E1213 22:17:41.638062 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:42.138024321 +0000 UTC m=+60.179362873 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:41 crc kubenswrapper[4866]: I1213 22:17:41.751254 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86"
Dec 13 22:17:41 crc kubenswrapper[4866]: E1213 22:17:41.751810 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:42.251799068 +0000 UTC m=+60.293137620 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:41 crc kubenswrapper[4866]: I1213 22:17:41.860658 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 22:17:41 crc kubenswrapper[4866]: E1213 22:17:41.861033 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:42.361017187 +0000 UTC m=+60.402355729 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:41 crc kubenswrapper[4866]: I1213 22:17:41.961873 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86"
Dec 13 22:17:41 crc kubenswrapper[4866]: E1213 22:17:41.962234 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:42.462220248 +0000 UTC m=+60.503558800 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.048882 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8p9hb"
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.064948 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 22:17:42 crc kubenswrapper[4866]: E1213 22:17:42.065151 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:42.565119249 +0000 UTC m=+60.606457811 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
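Each backoff deadline above carries both a wall-clock time and a monotonic offset, e.g. "2025-12-13 22:17:42.565119249 +0000 UTC m=+60.606457811", where m=+ is seconds since the kubelet process started. Subtracting one from the other recovers the process start time, which is a quick way to line these retries up against the beginning of the journal. A sketch using the pair from the UnmountVolume failure just above (values copied from the log; the arithmetic, not the tool, is the point):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout for Go's default time.Time formatting, as printed in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	wall, err := time.Parse(layout, "2025-12-13 22:17:42.565119249 +0000 UTC")
	if err != nil {
		panic(err)
	}
	offset := 60.606457811 // seconds, from "m=+60.606457811"
	start := wall.Add(-time.Duration(offset * float64(time.Second)))
	fmt.Println("kubelet started at:", start) // ≈ 2025-12-13 22:16:41.958 +0000 UTC
}

All of the m=+ offsets in this section map back to the same start time, roughly 22:16:41.96 UTC, consistent with the kubelet startup recorded at the top of this journal.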
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.065221 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86"
Dec 13 22:17:42 crc kubenswrapper[4866]: E1213 22:17:42.065559 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:42.565547269 +0000 UTC m=+60.606885821 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.236527 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 22:17:42 crc kubenswrapper[4866]: E1213 22:17:42.241355 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:42.741333544 +0000 UTC m=+60.782672096 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.241550 4866 patch_prober.go:28] interesting pod/router-default-5444994796-wbx6k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 13 22:17:42 crc kubenswrapper[4866]: [-]has-synced failed: reason withheld
Dec 13 22:17:42 crc kubenswrapper[4866]: [+]process-running ok
Dec 13 22:17:42 crc kubenswrapper[4866]: healthz check failed
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.241583 4866 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wbx6k" podUID="f12ed525-d75a-402f-b6c5-ce6298cb98f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.300526 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rw67t"]
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.350735 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86"
Dec 13 22:17:42 crc kubenswrapper[4866]: E1213 22:17:42.351011 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:42.851000714 +0000 UTC m=+60.892339266 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.412503 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-sdd5b" event={"ID":"1d7692f7-4101-4c41-86f0-d8c2883110bf","Type":"ContainerStarted","Data":"f07fcd1e4ddc10152f84262aab6baada1d98c43e40c88fb97b5edf3dd2f40d2c"}
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.441863 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-72w7j" event={"ID":"474a926a-06bd-4d6e-bd60-5cfd5a1fa37b","Type":"ContainerStarted","Data":"fa9b3ffde788ff92fd30bf2f11abbb113cda90b1f104f93068f7242657574d1c"}
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.451494 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 22:17:42 crc kubenswrapper[4866]: E1213 22:17:42.452135 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:42.952119622 +0000 UTC m=+60.993458174 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.470463 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qcsh9" event={"ID":"a3f3a62c-f055-4e72-8b85-47b8d577ca3a","Type":"ContainerStarted","Data":"b98df9f26ded3718463fef9a871c3c0b6a3fba4048fefc3a8e316926d9890948"}
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.495687 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9nvcf" event={"ID":"87bf1879-724a-4a9e-b4c8-d5de32457782","Type":"ContainerStarted","Data":"2771fdbf3df071939f839d24c49e1f02ab21bb3448b775dbc029183f80e6ea6b"}
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.502879 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rw67t" event={"ID":"d019a2fd-1864-4c5b-8deb-62c898466850","Type":"ContainerStarted","Data":"ae6c071ec73ad9491f98a3d0f12668034fb1297bc2d38971c35b83f7b60d9d46"}
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.509834 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-wj4kw" event={"ID":"538f1baf-acb2-4a08-9baf-2d710def7477","Type":"ContainerStarted","Data":"317b2ec7399cd7639d4f759351a2ef3baee56c9c42b40baa87f0d437eb70d393"}
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.542323 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a1ee147661a87e731fbaf619c1ebc123ce0209e6bd177ece910fa6e4d2bdf0bf"}
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.553879 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86"
Dec 13 22:17:42 crc kubenswrapper[4866]: E1213 22:17:42.555072 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:43.055030333 +0000 UTC m=+61.096368885 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.588544 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fw6xl" event={"ID":"f607d834-3c1c-45e8-8208-19ed5aca7e95","Type":"ContainerStarted","Data":"6cf7531792ea36c7a919d3cd6ab3ba182795c74faeeb1e53814a061060e767a5"}
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.629915 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sj6bb"]
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.630869 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sj6bb"
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.637975 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vvnn7" event={"ID":"0771c5ba-de65-4932-a37e-b21a2337f265","Type":"ContainerStarted","Data":"927fc1f7bbfa21e8fdfd0d1558b8b99725942499c9d7cc1ef7dfa3421d9bf453"}
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.656842 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.658773 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.660932 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sj6bb"]
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.672373 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"d6d0645c20faff1d9a4a19b18a6a9998557638d588313a95f97c982ed204ca75"}
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.672470 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 13 22:17:42 crc kubenswrapper[4866]: E1213 22:17:42.661399 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:43.161374045 +0000 UTC m=+61.202712597 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.696786 4866 generic.go:334] "Generic (PLEG): container finished" podID="9055b773-45d4-411e-a3cf-f160e940b102" containerID="9c3b4a2eae11ece8dc0a852479597f798fc4ab8cb8821bb8cfcfdfc583f47d6b" exitCode=0
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.696849 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29427735-h22d8" event={"ID":"9055b773-45d4-411e-a3cf-f160e940b102","Type":"ContainerDied","Data":"9c3b4a2eae11ece8dc0a852479597f798fc4ab8cb8821bb8cfcfdfc583f47d6b"}
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.698301 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-c5wt4" event={"ID":"d8283a63-3511-4508-ae32-3e5ec3488c19","Type":"ContainerStarted","Data":"9617766b0ef55860c16a5159bf0be7f16fe40a958c526a513cabc83d6accdfad"}
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.698516 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-c5wt4"
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.714884 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zgx58" event={"ID":"94716d23-2314-47b4-8159-f1f2f970c989","Type":"ContainerStarted","Data":"bd9570ae60496b549c24a64bc0ee64c59b33c13e835704ac9b607790cdc95b63"}
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.748159 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-5247b" event={"ID":"e110a18a-ce3e-4c58-b80d-9f8c9018a868","Type":"ContainerStarted","Data":"37dec9e8ed8b21a31e00d9c562cbe06f3a261d993e18bd55fdeef3a6d19e7087"}
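The ContainerStarted bursts above are PLEG (pod lifecycle event generator) events; the Data field is the 64-hex CRI-O ID of the sandbox or container that came up. Grouping them by pod makes it easy to spot pods that got a sandbox but no containers, or that keep logging "No sandbox for pod can be found". A rough sketch, again assuming the journal has been reflowed to one entry per line and making no claim about kubelet internals:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Captures the pod name and the started sandbox/container ID from
// "SyncLoop (PLEG): event for pod" lines like the ones above.
var plegRe = regexp.MustCompile(
	`event for pod" pod="([^"]+)" event=.*"Type":"ContainerStarted","Data":"([0-9a-f]{64})"`)

func main() {
	started := map[string][]string{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		if m := plegRe.FindStringSubmatch(sc.Text()); m != nil {
			started[m[1]] = append(started[m[1]], m[2])
		}
	}
	for pod, ids := range started {
		fmt.Println(pod, ids)
	}
}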
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.783709 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a61d5864-61a4-46a9-a4f5-020d4ed879cd-utilities\") pod \"certified-operators-sj6bb\" (UID: \"a61d5864-61a4-46a9-a4f5-020d4ed879cd\") " pod="openshift-marketplace/certified-operators-sj6bb"
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.784105 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86"
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.784226 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trz64\" (UniqueName: \"kubernetes.io/projected/a61d5864-61a4-46a9-a4f5-020d4ed879cd-kube-api-access-trz64\") pod \"certified-operators-sj6bb\" (UID: \"a61d5864-61a4-46a9-a4f5-020d4ed879cd\") " pod="openshift-marketplace/certified-operators-sj6bb"
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.784356 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a61d5864-61a4-46a9-a4f5-020d4ed879cd-catalog-content\") pod \"certified-operators-sj6bb\" (UID: \"a61d5864-61a4-46a9-a4f5-020d4ed879cd\") " pod="openshift-marketplace/certified-operators-sj6bb"
Dec 13 22:17:42 crc kubenswrapper[4866]: E1213 22:17:42.784700 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:43.284688316 +0000 UTC m=+61.326026868 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.784846 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"b96ba5cc315308720622e6498c7d978626ed97a7d5c7c9f85c5041ab3864f764"}
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.785363 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r85zh"
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.785678 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-kg5zb" podUID="eb8f2404-0acf-4a0a-a581-b5c767351742" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://9c9aaf4f8190bdeeec308225ac1adf791809e7a295803665e1107ae7dd0e5f40" gracePeriod=30
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.885679 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.885947 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a61d5864-61a4-46a9-a4f5-020d4ed879cd-catalog-content\") pod \"certified-operators-sj6bb\" (UID: \"a61d5864-61a4-46a9-a4f5-020d4ed879cd\") " pod="openshift-marketplace/certified-operators-sj6bb"
Dec 13 22:17:42 crc kubenswrapper[4866]: E1213 22:17:42.886026 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:43.385998449 +0000 UTC m=+61.427337001 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.886178 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a61d5864-61a4-46a9-a4f5-020d4ed879cd-utilities\") pod \"certified-operators-sj6bb\" (UID: \"a61d5864-61a4-46a9-a4f5-020d4ed879cd\") " pod="openshift-marketplace/certified-operators-sj6bb"
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.886448 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86"
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.886491 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trz64\" (UniqueName: \"kubernetes.io/projected/a61d5864-61a4-46a9-a4f5-020d4ed879cd-kube-api-access-trz64\") pod \"certified-operators-sj6bb\" (UID: \"a61d5864-61a4-46a9-a4f5-020d4ed879cd\") " pod="openshift-marketplace/certified-operators-sj6bb"
Dec 13 22:17:42 crc kubenswrapper[4866]: E1213 22:17:42.887418 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:43.387409832 +0000 UTC m=+61.428748384 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.888380 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a61d5864-61a4-46a9-a4f5-020d4ed879cd-catalog-content\") pod \"certified-operators-sj6bb\" (UID: \"a61d5864-61a4-46a9-a4f5-020d4ed879cd\") " pod="openshift-marketplace/certified-operators-sj6bb"
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.889308 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a61d5864-61a4-46a9-a4f5-020d4ed879cd-utilities\") pod \"certified-operators-sj6bb\" (UID: \"a61d5864-61a4-46a9-a4f5-020d4ed879cd\") " pod="openshift-marketplace/certified-operators-sj6bb"
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.933893 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trz64\" (UniqueName: \"kubernetes.io/projected/a61d5864-61a4-46a9-a4f5-020d4ed879cd-kube-api-access-trz64\") pod \"certified-operators-sj6bb\" (UID: \"a61d5864-61a4-46a9-a4f5-020d4ed879cd\") " pod="openshift-marketplace/certified-operators-sj6bb"
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.977780 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-dmd7t" podStartSLOduration=36.977764598 podStartE2EDuration="36.977764598s" podCreationTimestamp="2025-12-13 22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:42.975917954 +0000 UTC m=+61.017256506" watchObservedRunningTime="2025-12-13 22:17:42.977764598 +0000 UTC m=+61.019103150"
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.987832 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 22:17:42 crc kubenswrapper[4866]: E1213 22:17:42.988211 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:43.488192263 +0000 UTC m=+61.529530815 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:42 crc kubenswrapper[4866]: I1213 22:17:42.991339 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sj6bb"
Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.066744 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cwcm4"]
Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.067996 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cwcm4"
Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.070813 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cwcm4"]
Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.093105 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86"
Dec 13 22:17:43 crc kubenswrapper[4866]: E1213 22:17:43.093400 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:43.593389298 +0000 UTC m=+61.634727850 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:43 crc kubenswrapper[4866]: W1213 22:17:43.155320 4866 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d76fbae_d65b_45df_aee5_0924d2ec35e8.slice/crio-d8ec1824da967253ebc4c0ca44384ad6ee5cc096781c99e91564f62fdc570610 WatchSource:0}: Error finding container d8ec1824da967253ebc4c0ca44384ad6ee5cc096781c99e91564f62fdc570610: Status 404 returned error can't find the container with id d8ec1824da967253ebc4c0ca44384ad6ee5cc096781c99e91564f62fdc570610 Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.171470 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ppd79"] Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.182899 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9nvcf" podStartSLOduration=37.182883193 podStartE2EDuration="37.182883193s" podCreationTimestamp="2025-12-13 22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:43.164301796 +0000 UTC m=+61.205640338" watchObservedRunningTime="2025-12-13 22:17:43.182883193 +0000 UTC m=+61.224221745" Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.193829 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:43 crc kubenswrapper[4866]: E1213 22:17:43.194006 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:43.693982004 +0000 UTC m=+61.735320556 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.194092 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb228672-927d-42e6-bcde-d5733629cae2-utilities\") pod \"certified-operators-cwcm4\" (UID: \"bb228672-927d-42e6-bcde-d5733629cae2\") " pod="openshift-marketplace/certified-operators-cwcm4" Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.194136 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.194166 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb228672-927d-42e6-bcde-d5733629cae2-catalog-content\") pod \"certified-operators-cwcm4\" (UID: \"bb228672-927d-42e6-bcde-d5733629cae2\") " pod="openshift-marketplace/certified-operators-cwcm4" Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.194196 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f7b7\" (UniqueName: \"kubernetes.io/projected/bb228672-927d-42e6-bcde-d5733629cae2-kube-api-access-6f7b7\") pod \"certified-operators-cwcm4\" (UID: \"bb228672-927d-42e6-bcde-d5733629cae2\") " pod="openshift-marketplace/certified-operators-cwcm4" Dec 13 22:17:43 crc kubenswrapper[4866]: E1213 22:17:43.194513 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:43.694501716 +0000 UTC m=+61.735840258 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.237587 4866 patch_prober.go:28] interesting pod/router-default-5444994796-wbx6k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 22:17:43 crc kubenswrapper[4866]: [-]has-synced failed: reason withheld Dec 13 22:17:43 crc kubenswrapper[4866]: [+]process-running ok Dec 13 22:17:43 crc kubenswrapper[4866]: healthz check failed Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.237644 4866 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wbx6k" podUID="f12ed525-d75a-402f-b6c5-ce6298cb98f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.295099 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:43 crc kubenswrapper[4866]: E1213 22:17:43.295205 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:43.795187635 +0000 UTC m=+61.836526187 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.295606 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.295641 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb228672-927d-42e6-bcde-d5733629cae2-catalog-content\") pod \"certified-operators-cwcm4\" (UID: \"bb228672-927d-42e6-bcde-d5733629cae2\") " pod="openshift-marketplace/certified-operators-cwcm4" Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.295678 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f7b7\" (UniqueName: \"kubernetes.io/projected/bb228672-927d-42e6-bcde-d5733629cae2-kube-api-access-6f7b7\") pod \"certified-operators-cwcm4\" (UID: \"bb228672-927d-42e6-bcde-d5733629cae2\") " pod="openshift-marketplace/certified-operators-cwcm4" Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.295729 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb228672-927d-42e6-bcde-d5733629cae2-utilities\") pod \"certified-operators-cwcm4\" (UID: \"bb228672-927d-42e6-bcde-d5733629cae2\") " pod="openshift-marketplace/certified-operators-cwcm4" Dec 13 22:17:43 crc kubenswrapper[4866]: E1213 22:17:43.296211 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:43.796188268 +0000 UTC m=+61.837526820 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.296502 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb228672-927d-42e6-bcde-d5733629cae2-catalog-content\") pod \"certified-operators-cwcm4\" (UID: \"bb228672-927d-42e6-bcde-d5733629cae2\") " pod="openshift-marketplace/certified-operators-cwcm4" Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.296528 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb228672-927d-42e6-bcde-d5733629cae2-utilities\") pod \"certified-operators-cwcm4\" (UID: \"bb228672-927d-42e6-bcde-d5733629cae2\") " pod="openshift-marketplace/certified-operators-cwcm4" Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.396642 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:43 crc kubenswrapper[4866]: E1213 22:17:43.396988 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:43.896959739 +0000 UTC m=+61.938298291 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.416471 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f7b7\" (UniqueName: \"kubernetes.io/projected/bb228672-927d-42e6-bcde-d5733629cae2-kube-api-access-6f7b7\") pod \"certified-operators-cwcm4\" (UID: \"bb228672-927d-42e6-bcde-d5733629cae2\") " pod="openshift-marketplace/certified-operators-cwcm4" Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.418694 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-c5wt4" podStartSLOduration=12.41867773 podStartE2EDuration="12.41867773s" podCreationTimestamp="2025-12-13 22:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:43.308329034 +0000 UTC m=+61.349667586" watchObservedRunningTime="2025-12-13 22:17:43.41867773 +0000 UTC m=+61.460016282" Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.498407 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:43 crc kubenswrapper[4866]: E1213 22:17:43.498843 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:43.998826655 +0000 UTC m=+62.040165207 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.599624 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:43 crc kubenswrapper[4866]: E1213 22:17:43.599920 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:44.099906553 +0000 UTC m=+62.141245105 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.605821 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qcsh9" podStartSLOduration=37.605801372 podStartE2EDuration="37.605801372s" podCreationTimestamp="2025-12-13 22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:43.559359899 +0000 UTC m=+61.600698451" watchObservedRunningTime="2025-12-13 22:17:43.605801372 +0000 UTC m=+61.647139924" Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.607803 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6hxqc"] Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.608806 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6hxqc" Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.639967 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.701299 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmzqx\" (UniqueName: \"kubernetes.io/projected/902bee05-89e6-48b6-becf-d715d04dd8cd-kube-api-access-hmzqx\") pod \"redhat-marketplace-6hxqc\" (UID: \"902bee05-89e6-48b6-becf-d715d04dd8cd\") " pod="openshift-marketplace/redhat-marketplace-6hxqc" Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.701359 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/902bee05-89e6-48b6-becf-d715d04dd8cd-catalog-content\") pod \"redhat-marketplace-6hxqc\" (UID: \"902bee05-89e6-48b6-becf-d715d04dd8cd\") " pod="openshift-marketplace/redhat-marketplace-6hxqc" Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.701404 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.701430 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/902bee05-89e6-48b6-becf-d715d04dd8cd-utilities\") pod \"redhat-marketplace-6hxqc\" (UID: \"902bee05-89e6-48b6-becf-d715d04dd8cd\") " pod="openshift-marketplace/redhat-marketplace-6hxqc" Dec 13 22:17:43 crc kubenswrapper[4866]: E1213 22:17:43.701807 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-13 22:17:44.20178058 +0000 UTC m=+62.243119132 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.708834 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cwcm4" Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.741090 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.741720 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.762640 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-5247b" podStartSLOduration=37.762622641 podStartE2EDuration="37.762622641s" podCreationTimestamp="2025-12-13 22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:43.753718351 +0000 UTC m=+61.795056903" watchObservedRunningTime="2025-12-13 22:17:43.762622641 +0000 UTC m=+61.803961193" Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.776860 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.778817 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.784555 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.802801 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.803165 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmzqx\" (UniqueName: \"kubernetes.io/projected/902bee05-89e6-48b6-becf-d715d04dd8cd-kube-api-access-hmzqx\") pod \"redhat-marketplace-6hxqc\" (UID: \"902bee05-89e6-48b6-becf-d715d04dd8cd\") " pod="openshift-marketplace/redhat-marketplace-6hxqc" Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.803195 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/902bee05-89e6-48b6-becf-d715d04dd8cd-catalog-content\") pod \"redhat-marketplace-6hxqc\" (UID: \"902bee05-89e6-48b6-becf-d715d04dd8cd\") " pod="openshift-marketplace/redhat-marketplace-6hxqc" Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.803241 4866 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/902bee05-89e6-48b6-becf-d715d04dd8cd-utilities\") pod \"redhat-marketplace-6hxqc\" (UID: \"902bee05-89e6-48b6-becf-d715d04dd8cd\") " pod="openshift-marketplace/redhat-marketplace-6hxqc" Dec 13 22:17:43 crc kubenswrapper[4866]: E1213 22:17:43.803643 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:44.303625195 +0000 UTC m=+62.344963747 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.804582 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/902bee05-89e6-48b6-becf-d715d04dd8cd-catalog-content\") pod \"redhat-marketplace-6hxqc\" (UID: \"902bee05-89e6-48b6-becf-d715d04dd8cd\") " pod="openshift-marketplace/redhat-marketplace-6hxqc" Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.804863 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/902bee05-89e6-48b6-becf-d715d04dd8cd-utilities\") pod \"redhat-marketplace-6hxqc\" (UID: \"902bee05-89e6-48b6-becf-d715d04dd8cd\") " pod="openshift-marketplace/redhat-marketplace-6hxqc" Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.824214 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-sdd5b" event={"ID":"1d7692f7-4101-4c41-86f0-d8c2883110bf","Type":"ContainerStarted","Data":"60148475bb4546fdbb30b3245b55467fb34a52c272055ed5553744afabee598c"} Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.825032 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6hxqc"] Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.849577 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-wj4kw" event={"ID":"538f1baf-acb2-4a08-9baf-2d710def7477","Type":"ContainerStarted","Data":"66d2fc486a969cc135eca7691d257902e3a8fc9a8e9c4cbd07e41de831d081b4"} Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.856324 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ppd79" event={"ID":"9d76fbae-d65b-45df-aee5-0924d2ec35e8","Type":"ContainerStarted","Data":"d8ec1824da967253ebc4c0ca44384ad6ee5cc096781c99e91564f62fdc570610"} Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.870839 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-72w7j" event={"ID":"474a926a-06bd-4d6e-bd60-5cfd5a1fa37b","Type":"ContainerStarted","Data":"73d75e3160dbdf66f80ed96f3d9d48cfe467ec9b7b06b68670c37c8a00b4f293"} Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.872293 4866 generic.go:334] "Generic (PLEG): container finished" podID="d019a2fd-1864-4c5b-8deb-62c898466850" 
containerID="856bf9dace1a46b85e52c79071387a8e472a15e3e72eaebf8f1b9e7e509990d8" exitCode=0 Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.873215 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rw67t" event={"ID":"d019a2fd-1864-4c5b-8deb-62c898466850","Type":"ContainerDied","Data":"856bf9dace1a46b85e52c79071387a8e472a15e3e72eaebf8f1b9e7e509990d8"} Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.874941 4866 patch_prober.go:28] interesting pod/downloads-7954f5f757-tlbgv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.874972 4866 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-tlbgv" podUID="b5141b6f-cd05-4499-ae0d-cdd14f3f5a61" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.875139 4866 patch_prober.go:28] interesting pod/downloads-7954f5f757-tlbgv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.875191 4866 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tlbgv" podUID="b5141b6f-cd05-4499-ae0d-cdd14f3f5a61" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.875552 4866 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.900653 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmzqx\" (UniqueName: \"kubernetes.io/projected/902bee05-89e6-48b6-becf-d715d04dd8cd-kube-api-access-hmzqx\") pod \"redhat-marketplace-6hxqc\" (UID: \"902bee05-89e6-48b6-becf-d715d04dd8cd\") " pod="openshift-marketplace/redhat-marketplace-6hxqc" Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.908593 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/35219f83-5e99-476a-9f5d-979f0739127d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"35219f83-5e99-476a-9f5d-979f0739127d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.908652 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.908705 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35219f83-5e99-476a-9f5d-979f0739127d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: 
\"35219f83-5e99-476a-9f5d-979f0739127d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 13 22:17:43 crc kubenswrapper[4866]: E1213 22:17:43.909002 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:44.408992074 +0000 UTC m=+62.450330616 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:43 crc kubenswrapper[4866]: I1213 22:17:43.927424 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6hxqc" Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.011618 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r85zh" podStartSLOduration=38.011602028 podStartE2EDuration="38.011602028s" podCreationTimestamp="2025-12-13 22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:44.011419314 +0000 UTC m=+62.052757866" watchObservedRunningTime="2025-12-13 22:17:44.011602028 +0000 UTC m=+62.052940580" Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.012329 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.012649 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35219f83-5e99-476a-9f5d-979f0739127d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"35219f83-5e99-476a-9f5d-979f0739127d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.012835 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/35219f83-5e99-476a-9f5d-979f0739127d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"35219f83-5e99-476a-9f5d-979f0739127d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 13 22:17:44 crc kubenswrapper[4866]: E1213 22:17:44.013786 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:44.513770789 +0000 UTC m=+62.555109341 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.045444 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/35219f83-5e99-476a-9f5d-979f0739127d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"35219f83-5e99-476a-9f5d-979f0739127d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.094149 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ls9kr"] Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.096416 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ls9kr" Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.100226 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fw6xl" podStartSLOduration=38.100206302 podStartE2EDuration="38.100206302s" podCreationTimestamp="2025-12-13 22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:44.070946634 +0000 UTC m=+62.112285176" watchObservedRunningTime="2025-12-13 22:17:44.100206302 +0000 UTC m=+62.141544844" Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.117940 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:44 crc kubenswrapper[4866]: E1213 22:17:44.118312 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:44.618299048 +0000 UTC m=+62.659637600 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.160748 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ls9kr"] Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.160792 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-zp55r" Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.160804 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-zp55r" Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.164919 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vvnn7" podStartSLOduration=39.164906934 podStartE2EDuration="39.164906934s" podCreationTimestamp="2025-12-13 22:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:44.157981331 +0000 UTC m=+62.199319873" watchObservedRunningTime="2025-12-13 22:17:44.164906934 +0000 UTC m=+62.206245486" Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.166194 4866 patch_prober.go:28] interesting pod/console-f9d7485db-zp55r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.166250 4866 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-zp55r" podUID="0f6a72e7-f4e0-4628-91ad-c3f81514f9f9" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.167879 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qcsh9" Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.167905 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qcsh9" Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.178395 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35219f83-5e99-476a-9f5d-979f0739127d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"35219f83-5e99-476a-9f5d-979f0739127d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.190278 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qcsh9" Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.206721 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-72w7j" podStartSLOduration=38.206705318 podStartE2EDuration="38.206705318s" podCreationTimestamp="2025-12-13 
22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:44.204902145 +0000 UTC m=+62.246240697" watchObservedRunningTime="2025-12-13 22:17:44.206705318 +0000 UTC m=+62.248043870" Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.220701 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:44 crc kubenswrapper[4866]: E1213 22:17:44.220900 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:44.720875521 +0000 UTC m=+62.762214073 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.220981 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tztd2\" (UniqueName: \"kubernetes.io/projected/bfe21f96-9dfe-4862-8010-11559fa1c2b4-kube-api-access-tztd2\") pod \"redhat-marketplace-ls9kr\" (UID: \"bfe21f96-9dfe-4862-8010-11559fa1c2b4\") " pod="openshift-marketplace/redhat-marketplace-ls9kr" Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.221010 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfe21f96-9dfe-4862-8010-11559fa1c2b4-catalog-content\") pod \"redhat-marketplace-ls9kr\" (UID: \"bfe21f96-9dfe-4862-8010-11559fa1c2b4\") " pod="openshift-marketplace/redhat-marketplace-ls9kr" Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.221037 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.221248 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfe21f96-9dfe-4862-8010-11559fa1c2b4-utilities\") pod \"redhat-marketplace-ls9kr\" (UID: \"bfe21f96-9dfe-4862-8010-11559fa1c2b4\") " pod="openshift-marketplace/redhat-marketplace-ls9kr" Dec 13 22:17:44 crc kubenswrapper[4866]: E1213 22:17:44.221311 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-13 22:17:44.721299151 +0000 UTC m=+62.762637703 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.237951 4866 patch_prober.go:28] interesting pod/router-default-5444994796-wbx6k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 22:17:44 crc kubenswrapper[4866]: [-]has-synced failed: reason withheld Dec 13 22:17:44 crc kubenswrapper[4866]: [+]process-running ok Dec 13 22:17:44 crc kubenswrapper[4866]: healthz check failed Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.238013 4866 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wbx6k" podUID="f12ed525-d75a-402f-b6c5-ce6298cb98f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.238688 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-wbx6k" Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.263538 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sj6bb"] Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.322582 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:44 crc kubenswrapper[4866]: E1213 22:17:44.322842 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:44.822819079 +0000 UTC m=+62.864157631 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.323038 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tztd2\" (UniqueName: \"kubernetes.io/projected/bfe21f96-9dfe-4862-8010-11559fa1c2b4-kube-api-access-tztd2\") pod \"redhat-marketplace-ls9kr\" (UID: \"bfe21f96-9dfe-4862-8010-11559fa1c2b4\") " pod="openshift-marketplace/redhat-marketplace-ls9kr" Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.323125 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfe21f96-9dfe-4862-8010-11559fa1c2b4-catalog-content\") pod \"redhat-marketplace-ls9kr\" (UID: \"bfe21f96-9dfe-4862-8010-11559fa1c2b4\") " pod="openshift-marketplace/redhat-marketplace-ls9kr" Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.323187 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.323421 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfe21f96-9dfe-4862-8010-11559fa1c2b4-utilities\") pod \"redhat-marketplace-ls9kr\" (UID: \"bfe21f96-9dfe-4862-8010-11559fa1c2b4\") " pod="openshift-marketplace/redhat-marketplace-ls9kr" Dec 13 22:17:44 crc kubenswrapper[4866]: E1213 22:17:44.324478 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:44.824462488 +0000 UTC m=+62.865801040 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.327085 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfe21f96-9dfe-4862-8010-11559fa1c2b4-catalog-content\") pod \"redhat-marketplace-ls9kr\" (UID: \"bfe21f96-9dfe-4862-8010-11559fa1c2b4\") " pod="openshift-marketplace/redhat-marketplace-ls9kr"
Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.327763 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfe21f96-9dfe-4862-8010-11559fa1c2b4-utilities\") pod \"redhat-marketplace-ls9kr\" (UID: \"bfe21f96-9dfe-4862-8010-11559fa1c2b4\") " pod="openshift-marketplace/redhat-marketplace-ls9kr"
Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.327784 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-wj4kw" podStartSLOduration=39.327771846 podStartE2EDuration="39.327771846s" podCreationTimestamp="2025-12-13 22:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:44.291880511 +0000 UTC m=+62.333219063" watchObservedRunningTime="2025-12-13 22:17:44.327771846 +0000 UTC m=+62.369110398"
Dec 13 22:17:44 crc kubenswrapper[4866]: E1213 22:17:44.349219 4866 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d76fbae_d65b_45df_aee5_0924d2ec35e8.slice/crio-d4d8923b2fdeb373960fcd4407dddd029e4eaa9a2f22975f1e85705d4f8fb4e6.scope\": RecentStats: unable to find data in memory cache]"
Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.363268 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-sdd5b" podStartSLOduration=38.36325056 podStartE2EDuration="38.36325056s" podCreationTimestamp="2025-12-13 22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:44.361309505 +0000 UTC m=+62.402648057" watchObservedRunningTime="2025-12-13 22:17:44.36325056 +0000 UTC m=+62.404589112"
Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.379537 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.400789 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tztd2\" (UniqueName: \"kubernetes.io/projected/bfe21f96-9dfe-4862-8010-11559fa1c2b4-kube-api-access-tztd2\") pod \"redhat-marketplace-ls9kr\" (UID: \"bfe21f96-9dfe-4862-8010-11559fa1c2b4\") " pod="openshift-marketplace/redhat-marketplace-ls9kr"
Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.426897 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 22:17:44 crc kubenswrapper[4866]: E1213 22:17:44.427291 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:44.927276246 +0000 UTC m=+62.968614798 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.460970 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ls9kr"
Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.530634 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86"
Dec 13 22:17:44 crc kubenswrapper[4866]: E1213 22:17:44.530964 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:45.030953545 +0000 UTC m=+63.072292097 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.531598 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-brrt8"
Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.632612 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 22:17:44 crc kubenswrapper[4866]: E1213 22:17:44.633826 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:45.133809945 +0000 UTC m=+63.175148497 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.737004 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86"
Dec 13 22:17:44 crc kubenswrapper[4866]: E1213 22:17:44.737369 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:45.237353771 +0000 UTC m=+63.278692313 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.848155 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 22:17:44 crc kubenswrapper[4866]: E1213 22:17:44.848419 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:45.348395133 +0000 UTC m=+63.389733685 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.848595 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86"
Dec 13 22:17:44 crc kubenswrapper[4866]: E1213 22:17:44.849001 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:45.348977617 +0000 UTC m=+63.390316169 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.863453 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-wj4kw"
Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.863502 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-wj4kw"
Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.890092 4866 generic.go:334] "Generic (PLEG): container finished" podID="9d76fbae-d65b-45df-aee5-0924d2ec35e8" containerID="d4d8923b2fdeb373960fcd4407dddd029e4eaa9a2f22975f1e85705d4f8fb4e6" exitCode=0
Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.890167 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ppd79" event={"ID":"9d76fbae-d65b-45df-aee5-0924d2ec35e8","Type":"ContainerDied","Data":"d4d8923b2fdeb373960fcd4407dddd029e4eaa9a2f22975f1e85705d4f8fb4e6"}
Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.893323 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zgx58" event={"ID":"94716d23-2314-47b4-8159-f1f2f970c989","Type":"ContainerStarted","Data":"1273fe246cab216bdd25e5ac1cc1c5ea02e2e91b05e83dd30ec3a0c89d38f4a8"}
Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.901230 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sj6bb" event={"ID":"a61d5864-61a4-46a9-a4f5-020d4ed879cd","Type":"ContainerStarted","Data":"eb0376c50b4239a24c4fab0067448a65ba24e9f09e6571e47f2c2c9d0c157e81"}
Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.923423 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qcsh9"
Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.949590 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 22:17:44 crc kubenswrapper[4866]: E1213 22:17:44.950789 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:45.45075981 +0000 UTC m=+63.492098352 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:44 crc kubenswrapper[4866]: I1213 22:17:44.970918 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6hxqc"]
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.051191 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86"
Dec 13 22:17:45 crc kubenswrapper[4866]: E1213 22:17:45.051694 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:45.551683494 +0000 UTC m=+63.593022046 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:45 crc kubenswrapper[4866]: E1213 22:17:45.057846 4866 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9c9aaf4f8190bdeeec308225ac1adf791809e7a295803665e1107ae7dd0e5f40" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 22:17:45 crc kubenswrapper[4866]: E1213 22:17:45.062332 4866 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9c9aaf4f8190bdeeec308225ac1adf791809e7a295803665e1107ae7dd0e5f40" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 22:17:45 crc kubenswrapper[4866]: E1213 22:17:45.101180 4866 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9c9aaf4f8190bdeeec308225ac1adf791809e7a295803665e1107ae7dd0e5f40" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 22:17:45 crc kubenswrapper[4866]: E1213 22:17:45.101256 4866 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-kg5zb" podUID="eb8f2404-0acf-4a0a-a581-b5c767351742" containerName="kube-multus-additional-cni-plugins"
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.140594 4866 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29427735-h22d8"
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.154748 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 22:17:45 crc kubenswrapper[4866]: E1213 22:17:45.155152 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:45.655133438 +0000 UTC m=+63.696471990 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.255696 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9055b773-45d4-411e-a3cf-f160e940b102-secret-volume\") pod \"9055b773-45d4-411e-a3cf-f160e940b102\" (UID: \"9055b773-45d4-411e-a3cf-f160e940b102\") "
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.256236 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmjj4\" (UniqueName: \"kubernetes.io/projected/9055b773-45d4-411e-a3cf-f160e940b102-kube-api-access-hmjj4\") pod \"9055b773-45d4-411e-a3cf-f160e940b102\" (UID: \"9055b773-45d4-411e-a3cf-f160e940b102\") "
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.256299 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9055b773-45d4-411e-a3cf-f160e940b102-config-volume\") pod \"9055b773-45d4-411e-a3cf-f160e940b102\" (UID: \"9055b773-45d4-411e-a3cf-f160e940b102\") "
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.256499 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86"
Dec 13 22:17:45 crc kubenswrapper[4866]: E1213 22:17:45.256885 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:45.756869741 +0000 UTC m=+63.798208293 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.261777 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9055b773-45d4-411e-a3cf-f160e940b102-config-volume" (OuterVolumeSpecName: "config-volume") pod "9055b773-45d4-411e-a3cf-f160e940b102" (UID: "9055b773-45d4-411e-a3cf-f160e940b102"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.263297 4866 patch_prober.go:28] interesting pod/router-default-5444994796-wbx6k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 13 22:17:45 crc kubenswrapper[4866]: [-]has-synced failed: reason withheld
Dec 13 22:17:45 crc kubenswrapper[4866]: [+]process-running ok
Dec 13 22:17:45 crc kubenswrapper[4866]: healthz check failed
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.263346 4866 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wbx6k" podUID="f12ed525-d75a-402f-b6c5-ce6298cb98f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.273717 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9055b773-45d4-411e-a3cf-f160e940b102-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9055b773-45d4-411e-a3cf-f160e940b102" (UID: "9055b773-45d4-411e-a3cf-f160e940b102"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.285147 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9055b773-45d4-411e-a3cf-f160e940b102-kube-api-access-hmjj4" (OuterVolumeSpecName: "kube-api-access-hmjj4") pod "9055b773-45d4-411e-a3cf-f160e940b102" (UID: "9055b773-45d4-411e-a3cf-f160e940b102"). InnerVolumeSpecName "kube-api-access-hmjj4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.365937 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cwcm4"]
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.370514 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.370831 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmjj4\" (UniqueName: \"kubernetes.io/projected/9055b773-45d4-411e-a3cf-f160e940b102-kube-api-access-hmjj4\") on node \"crc\" DevicePath \"\""
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.370842 4866 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9055b773-45d4-411e-a3cf-f160e940b102-config-volume\") on node \"crc\" DevicePath \"\""
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.370850 4866 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9055b773-45d4-411e-a3cf-f160e940b102-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 13 22:17:45 crc kubenswrapper[4866]: E1213 22:17:45.370915 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:45.870899523 +0000 UTC m=+63.912238075 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.408766 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zdvwj"]
Dec 13 22:17:45 crc kubenswrapper[4866]: E1213 22:17:45.408991 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9055b773-45d4-411e-a3cf-f160e940b102" containerName="collect-profiles"
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.409022 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="9055b773-45d4-411e-a3cf-f160e940b102" containerName="collect-profiles"
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.409172 4866 memory_manager.go:354] "RemoveStaleState removing state" podUID="9055b773-45d4-411e-a3cf-f160e940b102" containerName="collect-profiles"
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.409810 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zdvwj"
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.415897 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.421217 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zdvwj"]
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.477679 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a29082d-49a1-4625-9d9b-568ef75773c8-utilities\") pod \"redhat-operators-zdvwj\" (UID: \"1a29082d-49a1-4625-9d9b-568ef75773c8\") " pod="openshift-marketplace/redhat-operators-zdvwj"
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.477724 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a29082d-49a1-4625-9d9b-568ef75773c8-catalog-content\") pod \"redhat-operators-zdvwj\" (UID: \"1a29082d-49a1-4625-9d9b-568ef75773c8\") " pod="openshift-marketplace/redhat-operators-zdvwj"
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.477752 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7wcf\" (UniqueName: \"kubernetes.io/projected/1a29082d-49a1-4625-9d9b-568ef75773c8-kube-api-access-d7wcf\") pod \"redhat-operators-zdvwj\" (UID: \"1a29082d-49a1-4625-9d9b-568ef75773c8\") " pod="openshift-marketplace/redhat-operators-zdvwj"
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.477817 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86"
Dec 13 22:17:45 crc kubenswrapper[4866]: E1213 22:17:45.478168 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:45.978152526 +0000 UTC m=+64.019491078 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.578795 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.579335 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a29082d-49a1-4625-9d9b-568ef75773c8-utilities\") pod \"redhat-operators-zdvwj\" (UID: \"1a29082d-49a1-4625-9d9b-568ef75773c8\") " pod="openshift-marketplace/redhat-operators-zdvwj"
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.579362 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a29082d-49a1-4625-9d9b-568ef75773c8-catalog-content\") pod \"redhat-operators-zdvwj\" (UID: \"1a29082d-49a1-4625-9d9b-568ef75773c8\") " pod="openshift-marketplace/redhat-operators-zdvwj"
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.579397 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7wcf\" (UniqueName: \"kubernetes.io/projected/1a29082d-49a1-4625-9d9b-568ef75773c8-kube-api-access-d7wcf\") pod \"redhat-operators-zdvwj\" (UID: \"1a29082d-49a1-4625-9d9b-568ef75773c8\") " pod="openshift-marketplace/redhat-operators-zdvwj"
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.579890 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a29082d-49a1-4625-9d9b-568ef75773c8-utilities\") pod \"redhat-operators-zdvwj\" (UID: \"1a29082d-49a1-4625-9d9b-568ef75773c8\") " pod="openshift-marketplace/redhat-operators-zdvwj"
Dec 13 22:17:45 crc kubenswrapper[4866]: E1213 22:17:45.579984 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:46.079966392 +0000 UTC m=+64.121304944 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.580207 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a29082d-49a1-4625-9d9b-568ef75773c8-catalog-content\") pod \"redhat-operators-zdvwj\" (UID: \"1a29082d-49a1-4625-9d9b-568ef75773c8\") " pod="openshift-marketplace/redhat-operators-zdvwj"
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.610387 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7wcf\" (UniqueName: \"kubernetes.io/projected/1a29082d-49a1-4625-9d9b-568ef75773c8-kube-api-access-d7wcf\") pod \"redhat-operators-zdvwj\" (UID: \"1a29082d-49a1-4625-9d9b-568ef75773c8\") " pod="openshift-marketplace/redhat-operators-zdvwj"
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.681542 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.681734 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86"
Dec 13 22:17:45 crc kubenswrapper[4866]: E1213 22:17:45.682184 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:46.182171596 +0000 UTC m=+64.223510148 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.767541 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zdvwj"
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.783484 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 22:17:45 crc kubenswrapper[4866]: E1213 22:17:45.784272 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:46.284255657 +0000 UTC m=+64.325594209 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.809878 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9rw56"]
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.813378 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9rw56"
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.886425 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b87d7d9b-aff1-45c9-8824-35fe7442cc07-catalog-content\") pod \"redhat-operators-9rw56\" (UID: \"b87d7d9b-aff1-45c9-8824-35fe7442cc07\") " pod="openshift-marketplace/redhat-operators-9rw56"
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.886476 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6km9\" (UniqueName: \"kubernetes.io/projected/b87d7d9b-aff1-45c9-8824-35fe7442cc07-kube-api-access-w6km9\") pod \"redhat-operators-9rw56\" (UID: \"b87d7d9b-aff1-45c9-8824-35fe7442cc07\") " pod="openshift-marketplace/redhat-operators-9rw56"
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.886554 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86"
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.886575 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b87d7d9b-aff1-45c9-8824-35fe7442cc07-utilities\") pod \"redhat-operators-9rw56\" (UID: \"b87d7d9b-aff1-45c9-8824-35fe7442cc07\") " pod="openshift-marketplace/redhat-operators-9rw56"
Dec 13 22:17:45 crc kubenswrapper[4866]: E1213 22:17:45.886929 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:46.386902622 +0000 UTC m=+64.428241174 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.915631 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.916304 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.923480 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.923675 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.966928 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"35219f83-5e99-476a-9f5d-979f0739127d","Type":"ContainerStarted","Data":"6b8729c32e1f0183f8459190ebae931d9f9f372b54a1ca2d050c5519def50994"}
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.969723 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9rw56"]
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.970866 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwcm4" event={"ID":"bb228672-927d-42e6-bcde-d5733629cae2","Type":"ContainerStarted","Data":"85a8fab3dc86ed7d029b9e356a0c53a789412d2290c3588cf26c46d37e71df2e"}
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.989738 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.989974 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b87d7d9b-aff1-45c9-8824-35fe7442cc07-utilities\") pod \"redhat-operators-9rw56\" (UID: \"b87d7d9b-aff1-45c9-8824-35fe7442cc07\") " pod="openshift-marketplace/redhat-operators-9rw56"
Dec 13 22:17:45 crc kubenswrapper[4866]: E1213 22:17:45.990034 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:46.490014567 +0000 UTC m=+64.531353119 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.990195 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b87d7d9b-aff1-45c9-8824-35fe7442cc07-catalog-content\") pod \"redhat-operators-9rw56\" (UID: \"b87d7d9b-aff1-45c9-8824-35fe7442cc07\") " pod="openshift-marketplace/redhat-operators-9rw56"
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.990231 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6km9\" (UniqueName: \"kubernetes.io/projected/b87d7d9b-aff1-45c9-8824-35fe7442cc07-kube-api-access-w6km9\") pod \"redhat-operators-9rw56\" (UID: \"b87d7d9b-aff1-45c9-8824-35fe7442cc07\") " pod="openshift-marketplace/redhat-operators-9rw56"
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.990257 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5bef92ad-0cb2-491e-a43d-cb01b5acd441-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5bef92ad-0cb2-491e-a43d-cb01b5acd441\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.990274 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5bef92ad-0cb2-491e-a43d-cb01b5acd441-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5bef92ad-0cb2-491e-a43d-cb01b5acd441\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.990355 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b87d7d9b-aff1-45c9-8824-35fe7442cc07-utilities\") pod \"redhat-operators-9rw56\" (UID: \"b87d7d9b-aff1-45c9-8824-35fe7442cc07\") " pod="openshift-marketplace/redhat-operators-9rw56"
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.990564 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b87d7d9b-aff1-45c9-8824-35fe7442cc07-catalog-content\") pod \"redhat-operators-9rw56\" (UID: \"b87d7d9b-aff1-45c9-8824-35fe7442cc07\") " pod="openshift-marketplace/redhat-operators-9rw56"
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.996365 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29427735-h22d8" event={"ID":"9055b773-45d4-411e-a3cf-f160e940b102","Type":"ContainerDied","Data":"421b4b4761e62cd617db416c39082a54a14684cc1f3d3e9423ac36fa24487316"}
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.996670 4866 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="421b4b4761e62cd617db416c39082a54a14684cc1f3d3e9423ac36fa24487316"
Dec 13 22:17:45 crc kubenswrapper[4866]: I1213 22:17:45.996777 4866 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29427735-h22d8"
Dec 13 22:17:46 crc kubenswrapper[4866]: I1213 22:17:46.033442 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6hxqc" event={"ID":"902bee05-89e6-48b6-becf-d715d04dd8cd","Type":"ContainerStarted","Data":"e7ca2401eeff7540afa03fc60eb34f3e64350ccfc3a584d526e387810be2845d"}
Dec 13 22:17:46 crc kubenswrapper[4866]: I1213 22:17:46.042152 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ls9kr"]
Dec 13 22:17:46 crc kubenswrapper[4866]: I1213 22:17:46.063895 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sj6bb" event={"ID":"a61d5864-61a4-46a9-a4f5-020d4ed879cd","Type":"ContainerStarted","Data":"091874fd34e0eb0d9d4e6791815eebd35ea4588d1bd65aa58e39971702c7d4cf"}
Dec 13 22:17:46 crc kubenswrapper[4866]: W1213 22:17:46.065758 4866 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfe21f96_9dfe_4862_8010_11559fa1c2b4.slice/crio-9c35ce655382e6f2c201c991306d41d1e455a8419b6b5f852baa74d837ceebfb WatchSource:0}: Error finding container 9c35ce655382e6f2c201c991306d41d1e455a8419b6b5f852baa74d837ceebfb: Status 404 returned error can't find the container with id 9c35ce655382e6f2c201c991306d41d1e455a8419b6b5f852baa74d837ceebfb
Dec 13 22:17:46 crc kubenswrapper[4866]: I1213 22:17:46.083743 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6km9\" (UniqueName: \"kubernetes.io/projected/b87d7d9b-aff1-45c9-8824-35fe7442cc07-kube-api-access-w6km9\") pod \"redhat-operators-9rw56\" (UID: \"b87d7d9b-aff1-45c9-8824-35fe7442cc07\") " pod="openshift-marketplace/redhat-operators-9rw56"
Dec 13 22:17:46 crc kubenswrapper[4866]: I1213 22:17:46.091449 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5bef92ad-0cb2-491e-a43d-cb01b5acd441-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5bef92ad-0cb2-491e-a43d-cb01b5acd441\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 13 22:17:46 crc kubenswrapper[4866]: I1213 22:17:46.091632 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5bef92ad-0cb2-491e-a43d-cb01b5acd441-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5bef92ad-0cb2-491e-a43d-cb01b5acd441\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 13 22:17:46 crc kubenswrapper[4866]: I1213 22:17:46.091740 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86"
Dec 13 22:17:46 crc kubenswrapper[4866]: I1213 22:17:46.092027 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5bef92ad-0cb2-491e-a43d-cb01b5acd441-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5bef92ad-0cb2-491e-a43d-cb01b5acd441\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 13 22:17:46 crc kubenswrapper[4866]: E1213 22:17:46.092194 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:46.592182911 +0000 UTC m=+64.633521463 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:46 crc kubenswrapper[4866]: I1213 22:17:46.101293 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Dec 13 22:17:46 crc kubenswrapper[4866]: I1213 22:17:46.136631 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5bef92ad-0cb2-491e-a43d-cb01b5acd441-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5bef92ad-0cb2-491e-a43d-cb01b5acd441\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 13 22:17:46 crc kubenswrapper[4866]: I1213 22:17:46.176807 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9rw56"
Dec 13 22:17:46 crc kubenswrapper[4866]: I1213 22:17:46.193394 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 22:17:46 crc kubenswrapper[4866]: E1213 22:17:46.194479 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:46.694465787 +0000 UTC m=+64.735804339 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:46 crc kubenswrapper[4866]: I1213 22:17:46.243229 4866 patch_prober.go:28] interesting pod/router-default-5444994796-wbx6k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 13 22:17:46 crc kubenswrapper[4866]: [-]has-synced failed: reason withheld
Dec 13 22:17:46 crc kubenswrapper[4866]: [+]process-running ok
Dec 13 22:17:46 crc kubenswrapper[4866]: healthz check failed
Dec 13 22:17:46 crc kubenswrapper[4866]: I1213 22:17:46.243516 4866 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wbx6k" podUID="f12ed525-d75a-402f-b6c5-ce6298cb98f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 13 22:17:46 crc kubenswrapper[4866]: I1213 22:17:46.291358 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 13 22:17:46 crc kubenswrapper[4866]: I1213 22:17:46.294822 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86"
Dec 13 22:17:46 crc kubenswrapper[4866]: E1213 22:17:46.295245 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:46.795229657 +0000 UTC m=+64.836568209 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:46 crc kubenswrapper[4866]: I1213 22:17:46.373099 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zdvwj"]
Dec 13 22:17:46 crc kubenswrapper[4866]: I1213 22:17:46.395582 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 22:17:46 crc kubenswrapper[4866]: E1213 22:17:46.395696 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:46.8956782 +0000 UTC m=+64.937016752 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:46 crc kubenswrapper[4866]: E1213 22:17:46.396307 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:46.896297295 +0000 UTC m=+64.937635847 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:46 crc kubenswrapper[4866]: I1213 22:17:46.396038 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86"
Dec 13 22:17:46 crc kubenswrapper[4866]: I1213 22:17:46.497877 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 22:17:46 crc kubenswrapper[4866]: E1213 22:17:46.497978 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:46.997962257 +0000 UTC m=+65.039300809 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:46 crc kubenswrapper[4866]: E1213 22:17:46.498552 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:46.99854363 +0000 UTC m=+65.039882172 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:46 crc kubenswrapper[4866]: I1213 22:17:46.498263 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86"
Dec 13 22:17:46 crc kubenswrapper[4866]: I1213 22:17:46.599783 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 22:17:46 crc kubenswrapper[4866]: E1213 22:17:46.599993 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:47.099968886 +0000 UTC m=+65.141307428 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:46 crc kubenswrapper[4866]: I1213 22:17:46.600127 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86"
Dec 13 22:17:46 crc kubenswrapper[4866]: E1213 22:17:46.600371 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:47.100365236 +0000 UTC m=+65.141703788 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:46 crc kubenswrapper[4866]: I1213 22:17:46.700713 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 22:17:46 crc kubenswrapper[4866]: E1213 22:17:46.992359 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:47.492337396 +0000 UTC m=+65.533675948 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:47 crc kubenswrapper[4866]: I1213 22:17:47.004621 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86"
Dec 13 22:17:47 crc kubenswrapper[4866]: E1213 22:17:47.004934 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:47.504923973 +0000 UTC m=+65.546262525 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:47 crc kubenswrapper[4866]: I1213 22:17:47.086740 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdvwj" event={"ID":"1a29082d-49a1-4625-9d9b-568ef75773c8","Type":"ContainerStarted","Data":"c244c0242cadbe3aa7813ccc93bdf123aaf8dd49856d75462500a4a4a3e1fb24"}
Dec 13 22:17:47 crc kubenswrapper[4866]: I1213 22:17:47.096311 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ls9kr" event={"ID":"bfe21f96-9dfe-4862-8010-11559fa1c2b4","Type":"ContainerStarted","Data":"9c35ce655382e6f2c201c991306d41d1e455a8419b6b5f852baa74d837ceebfb"}
Dec 13 22:17:47 crc kubenswrapper[4866]: I1213 22:17:47.106797 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 22:17:47 crc kubenswrapper[4866]: E1213 22:17:47.107169 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:47.607155867 +0000 UTC m=+65.648494419 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 22:17:47 crc kubenswrapper[4866]: I1213 22:17:47.203703 4866 patch_prober.go:28] interesting pod/apiserver-76f77b778f-wj4kw container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Dec 13 22:17:47 crc kubenswrapper[4866]: [+]log ok
Dec 13 22:17:47 crc kubenswrapper[4866]: [+]etcd ok
Dec 13 22:17:47 crc kubenswrapper[4866]: [+]poststarthook/start-apiserver-admission-initializer ok
Dec 13 22:17:47 crc kubenswrapper[4866]: [+]poststarthook/generic-apiserver-start-informers ok
Dec 13 22:17:47 crc kubenswrapper[4866]: [+]poststarthook/max-in-flight-filter ok
Dec 13 22:17:47 crc kubenswrapper[4866]: [+]poststarthook/storage-object-count-tracker-hook ok
Dec 13 22:17:47 crc kubenswrapper[4866]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Dec 13 22:17:47 crc kubenswrapper[4866]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Dec 13 22:17:47 crc kubenswrapper[4866]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Dec 13 22:17:47 crc kubenswrapper[4866]: [+]poststarthook/project.openshift.io-projectcache ok
Dec 13 22:17:47 crc kubenswrapper[4866]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Dec 13 22:17:47 crc kubenswrapper[4866]: [+]poststarthook/openshift.io-startinformers ok
Dec 13 22:17:47 crc kubenswrapper[4866]: [+]poststarthook/openshift.io-restmapperupdater ok
Dec 13 22:17:47 crc kubenswrapper[4866]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Dec 13 22:17:47 crc kubenswrapper[4866]: livez check failed
Dec 13 22:17:47 crc kubenswrapper[4866]: I1213 22:17:47.204354 4866 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-wj4kw" podUID="538f1baf-acb2-4a08-9baf-2d710def7477" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 13 22:17:47 crc kubenswrapper[4866]: I1213 22:17:47.207923 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86"
Dec 13 22:17:47 crc kubenswrapper[4866]: E1213 22:17:47.208296 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:47.708284556 +0000 UTC m=+65.749623108 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:47 crc kubenswrapper[4866]: I1213 22:17:47.242223 4866 patch_prober.go:28] interesting pod/router-default-5444994796-wbx6k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 22:17:47 crc kubenswrapper[4866]: [-]has-synced failed: reason withheld Dec 13 22:17:47 crc kubenswrapper[4866]: [+]process-running ok Dec 13 22:17:47 crc kubenswrapper[4866]: healthz check failed Dec 13 22:17:47 crc kubenswrapper[4866]: I1213 22:17:47.242279 4866 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wbx6k" podUID="f12ed525-d75a-402f-b6c5-ce6298cb98f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 22:17:47 crc kubenswrapper[4866]: I1213 22:17:47.308905 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:47 crc kubenswrapper[4866]: E1213 22:17:47.309231 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:47.809216431 +0000 UTC m=+65.850554983 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:47 crc kubenswrapper[4866]: I1213 22:17:47.409481 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 13 22:17:47 crc kubenswrapper[4866]: I1213 22:17:47.411799 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:47 crc kubenswrapper[4866]: E1213 22:17:47.412592 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:47.912579692 +0000 UTC m=+65.953918244 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:47 crc kubenswrapper[4866]: I1213 22:17:47.515949 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:47 crc kubenswrapper[4866]: E1213 22:17:47.516775 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:48.016755743 +0000 UTC m=+66.058094295 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:47 crc kubenswrapper[4866]: I1213 22:17:47.600215 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9rw56"] Dec 13 22:17:47 crc kubenswrapper[4866]: I1213 22:17:47.620076 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:47 crc kubenswrapper[4866]: E1213 22:17:47.620453 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:48.120441582 +0000 UTC m=+66.161780134 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:47 crc kubenswrapper[4866]: I1213 22:17:47.722512 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:47 crc kubenswrapper[4866]: E1213 22:17:47.722689 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:48.222668267 +0000 UTC m=+66.264006819 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:47 crc kubenswrapper[4866]: I1213 22:17:47.723096 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:47 crc kubenswrapper[4866]: E1213 22:17:47.723406 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:48.223394494 +0000 UTC m=+66.264733046 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:47 crc kubenswrapper[4866]: I1213 22:17:47.824456 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:47 crc kubenswrapper[4866]: E1213 22:17:47.824636 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:48.324620705 +0000 UTC m=+66.365959257 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:47 crc kubenswrapper[4866]: I1213 22:17:47.824688 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:47 crc kubenswrapper[4866]: E1213 22:17:47.824993 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:48.324978324 +0000 UTC m=+66.366316876 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:47 crc kubenswrapper[4866]: I1213 22:17:47.926077 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:47 crc kubenswrapper[4866]: E1213 22:17:47.926387 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:48.426373529 +0000 UTC m=+66.467712081 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:48 crc kubenswrapper[4866]: I1213 22:17:48.027493 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:48 crc kubenswrapper[4866]: E1213 22:17:48.027829 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:48.527810235 +0000 UTC m=+66.569148787 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:48 crc kubenswrapper[4866]: I1213 22:17:48.100924 4866 generic.go:334] "Generic (PLEG): container finished" podID="1a29082d-49a1-4625-9d9b-568ef75773c8" containerID="491bfe3b44955d571343f182aec320862a1381c3bc5511c8498544f9ba98bb0d" exitCode=0 Dec 13 22:17:48 crc kubenswrapper[4866]: I1213 22:17:48.100987 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdvwj" event={"ID":"1a29082d-49a1-4625-9d9b-568ef75773c8","Type":"ContainerDied","Data":"491bfe3b44955d571343f182aec320862a1381c3bc5511c8498544f9ba98bb0d"} Dec 13 22:17:48 crc kubenswrapper[4866]: I1213 22:17:48.102096 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5bef92ad-0cb2-491e-a43d-cb01b5acd441","Type":"ContainerStarted","Data":"d9fe92e1a1b687c204e7d4e53f74661d4d0451d20752211994025c733a4ae6cc"} Dec 13 22:17:48 crc kubenswrapper[4866]: I1213 22:17:48.103036 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9rw56" event={"ID":"b87d7d9b-aff1-45c9-8824-35fe7442cc07","Type":"ContainerStarted","Data":"509f72226cd4936a771e43a368ed311532cc4d1b2c63c04cbdd8e9043561f499"} Dec 13 22:17:48 crc kubenswrapper[4866]: I1213 22:17:48.105334 4866 generic.go:334] "Generic (PLEG): container finished" podID="a61d5864-61a4-46a9-a4f5-020d4ed879cd" containerID="091874fd34e0eb0d9d4e6791815eebd35ea4588d1bd65aa58e39971702c7d4cf" exitCode=0 Dec 13 22:17:48 crc kubenswrapper[4866]: I1213 22:17:48.105368 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sj6bb" event={"ID":"a61d5864-61a4-46a9-a4f5-020d4ed879cd","Type":"ContainerDied","Data":"091874fd34e0eb0d9d4e6791815eebd35ea4588d1bd65aa58e39971702c7d4cf"} Dec 13 22:17:48 crc kubenswrapper[4866]: I1213 22:17:48.108917 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"35219f83-5e99-476a-9f5d-979f0739127d","Type":"ContainerStarted","Data":"241b04da837cc2bc713f4d15b8efc3b2151b3a5a472fdb2da9b0174dccc9efa2"} Dec 13 22:17:48 crc kubenswrapper[4866]: I1213 22:17:48.112295 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwcm4" event={"ID":"bb228672-927d-42e6-bcde-d5733629cae2","Type":"ContainerStarted","Data":"c50f2b389e9a83ad67450858e72445525e8e1bcc8c057cbbcf672e1dfd84929e"} Dec 13 22:17:48 crc kubenswrapper[4866]: I1213 22:17:48.113340 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ls9kr" event={"ID":"bfe21f96-9dfe-4862-8010-11559fa1c2b4","Type":"ContainerStarted","Data":"e052c771b7a4945c01a0c7681b5e54b7a85de6e5393512a2856ceafc790d0176"} Dec 13 22:17:48 crc kubenswrapper[4866]: I1213 22:17:48.129132 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:48 crc kubenswrapper[4866]: E1213 22:17:48.129254 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:48.629236401 +0000 UTC m=+66.670574953 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:48 crc kubenswrapper[4866]: I1213 22:17:48.129405 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:48 crc kubenswrapper[4866]: E1213 22:17:48.129682 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:48.629674261 +0000 UTC m=+66.671012813 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:48 crc kubenswrapper[4866]: I1213 22:17:48.230394 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:48 crc kubenswrapper[4866]: E1213 22:17:48.230800 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:48.73078602 +0000 UTC m=+66.772124562 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:48 crc kubenswrapper[4866]: I1213 22:17:48.234678 4866 patch_prober.go:28] interesting pod/router-default-5444994796-wbx6k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 22:17:48 crc kubenswrapper[4866]: [-]has-synced failed: reason withheld Dec 13 22:17:48 crc kubenswrapper[4866]: [+]process-running ok Dec 13 22:17:48 crc kubenswrapper[4866]: healthz check failed Dec 13 22:17:48 crc kubenswrapper[4866]: I1213 22:17:48.234713 4866 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wbx6k" podUID="f12ed525-d75a-402f-b6c5-ce6298cb98f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 22:17:48 crc kubenswrapper[4866]: I1213 22:17:48.332002 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:48 crc kubenswrapper[4866]: E1213 22:17:48.332311 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:48.832300598 +0000 UTC m=+66.873639150 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:48 crc kubenswrapper[4866]: I1213 22:17:48.433460 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:48 crc kubenswrapper[4866]: E1213 22:17:48.433814 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:48.933800436 +0000 UTC m=+66.975138988 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:48 crc kubenswrapper[4866]: I1213 22:17:48.534930 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:48 crc kubenswrapper[4866]: E1213 22:17:48.535311 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:49.035297032 +0000 UTC m=+67.076635574 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:48 crc kubenswrapper[4866]: I1213 22:17:48.630610 4866 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 13 22:17:48 crc kubenswrapper[4866]: I1213 22:17:48.645817 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:48 crc kubenswrapper[4866]: E1213 22:17:48.646255 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:49.146237572 +0000 UTC m=+67.187576324 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:48 crc kubenswrapper[4866]: I1213 22:17:48.750511 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:48 crc kubenswrapper[4866]: E1213 22:17:48.751530 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 22:17:49.251513069 +0000 UTC m=+67.292851621 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wzf86" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:48 crc kubenswrapper[4866]: I1213 22:17:48.853381 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:48 crc kubenswrapper[4866]: E1213 22:17:48.853743 4866 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 22:17:49.353711043 +0000 UTC m=+67.395049625 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 22:17:48 crc kubenswrapper[4866]: I1213 22:17:48.872204 4866 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-13T22:17:48.630631985Z","Handler":null,"Name":""} Dec 13 22:17:48 crc kubenswrapper[4866]: I1213 22:17:48.876213 4866 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 13 22:17:48 crc kubenswrapper[4866]: I1213 22:17:48.876278 4866 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 13 22:17:48 crc kubenswrapper[4866]: I1213 22:17:48.960934 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:49 crc kubenswrapper[4866]: I1213 22:17:49.051116 4866 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
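Note on the records above: every MountVolume/UnmountVolume attempt for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 failed with "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers", each time re-queued with a fixed 500ms durationBeforeRetry. The failures clear at 22:17:48, when the plugin watcher picks up the registration socket under /var/lib/kubelet/plugins_registry/ and csi_plugin.go validates and registers the driver at /var/lib/kubelet/plugins/csi-hostpath/csi.sock; the records that follow show MountVolume.MountDevice and SetUp succeeding for the image-registry pod. As a minimal, self-contained sketch (not part of this log), the program below lists the CSI drivers a node has registered, i.e. the state the kubelet consults when it emits the "not found in the list of registered CSI drivers" error. Assumptions: client-go is on the module path, a kubeconfig exists at $HOME/.kube/config, and the node name "crc" is taken from these records.

    // csidrivers.go: diagnostic sketch, lists CSI drivers registered on a node.
    package main

    import (
        "context"
        "fmt"
        "os"
        "path/filepath"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location; adjust for the cluster at hand.
        kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // The kubelet updates the CSINode object as drivers register over the
        // plugins_registry socket; until a driver appears here, CSI mounts for
        // its volumes fail exactly as in the records above.
        node, err := client.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, d := range node.Spec.Drivers {
            fmt.Printf("registered driver: %s (nodeID %s)\n", d.Name, d.NodeID)
        }
    }

Before the registration at 22:17:48.876, such a listing would not include kubevirt.io.hostpath-provisioner; afterwards it would, which matches the transition from repeated mount errors to "MountVolume.MountDevice succeeded" in the next records.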
Dec 13 22:17:49 crc kubenswrapper[4866]: I1213 22:17:49.051170 4866 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:49 crc kubenswrapper[4866]: I1213 22:17:49.128938 4866 generic.go:334] "Generic (PLEG): container finished" podID="bb228672-927d-42e6-bcde-d5733629cae2" containerID="c50f2b389e9a83ad67450858e72445525e8e1bcc8c057cbbcf672e1dfd84929e" exitCode=0 Dec 13 22:17:49 crc kubenswrapper[4866]: I1213 22:17:49.129288 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwcm4" event={"ID":"bb228672-927d-42e6-bcde-d5733629cae2","Type":"ContainerDied","Data":"c50f2b389e9a83ad67450858e72445525e8e1bcc8c057cbbcf672e1dfd84929e"} Dec 13 22:17:49 crc kubenswrapper[4866]: I1213 22:17:49.137877 4866 generic.go:334] "Generic (PLEG): container finished" podID="bfe21f96-9dfe-4862-8010-11559fa1c2b4" containerID="e052c771b7a4945c01a0c7681b5e54b7a85de6e5393512a2856ceafc790d0176" exitCode=0 Dec 13 22:17:49 crc kubenswrapper[4866]: I1213 22:17:49.137954 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ls9kr" event={"ID":"bfe21f96-9dfe-4862-8010-11559fa1c2b4","Type":"ContainerDied","Data":"e052c771b7a4945c01a0c7681b5e54b7a85de6e5393512a2856ceafc790d0176"} Dec 13 22:17:49 crc kubenswrapper[4866]: I1213 22:17:49.153784 4866 generic.go:334] "Generic (PLEG): container finished" podID="902bee05-89e6-48b6-becf-d715d04dd8cd" containerID="af39cce1b3a98f4219372a21bd3cd179fbfe716dc919a6f823e87f927c91c1fa" exitCode=0 Dec 13 22:17:49 crc kubenswrapper[4866]: I1213 22:17:49.154091 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6hxqc" event={"ID":"902bee05-89e6-48b6-becf-d715d04dd8cd","Type":"ContainerDied","Data":"af39cce1b3a98f4219372a21bd3cd179fbfe716dc919a6f823e87f927c91c1fa"} Dec 13 22:17:49 crc kubenswrapper[4866]: I1213 22:17:49.204086 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wzf86\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") " pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:49 crc kubenswrapper[4866]: I1213 22:17:49.214389 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 13 22:17:49 crc kubenswrapper[4866]: I1213 22:17:49.223896 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:49 crc kubenswrapper[4866]: I1213 22:17:49.234564 4866 patch_prober.go:28] interesting pod/router-default-5444994796-wbx6k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 22:17:49 crc kubenswrapper[4866]: [-]has-synced failed: reason withheld Dec 13 22:17:49 crc kubenswrapper[4866]: [+]process-running ok Dec 13 22:17:49 crc kubenswrapper[4866]: healthz check failed Dec 13 22:17:49 crc kubenswrapper[4866]: I1213 22:17:49.234615 4866 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wbx6k" podUID="f12ed525-d75a-402f-b6c5-ce6298cb98f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 22:17:49 crc kubenswrapper[4866]: I1213 22:17:49.263573 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 22:17:49 crc kubenswrapper[4866]: I1213 22:17:49.278526 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 13 22:17:49 crc kubenswrapper[4866]: I1213 22:17:49.481115 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=6.481099172 podStartE2EDuration="6.481099172s" podCreationTimestamp="2025-12-13 22:17:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:49.210722951 +0000 UTC m=+67.252061513" watchObservedRunningTime="2025-12-13 22:17:49.481099172 +0000 UTC m=+67.522437724" Dec 13 22:17:49 crc kubenswrapper[4866]: I1213 22:17:49.482836 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wzf86"] Dec 13 22:17:49 crc kubenswrapper[4866]: W1213 22:17:49.505733 4866 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fe29c2e_47b8_434d_a38f_8edd2992e345.slice/crio-0f0794a222da61b0ca55054cd1bf08ff618259152d6b606bca4f07d983207fbe WatchSource:0}: Error finding container 0f0794a222da61b0ca55054cd1bf08ff618259152d6b606bca4f07d983207fbe: Status 404 returned error can't find the container with id 0f0794a222da61b0ca55054cd1bf08ff618259152d6b606bca4f07d983207fbe Dec 13 22:17:49 crc kubenswrapper[4866]: I1213 22:17:49.867752 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-wj4kw" Dec 13 22:17:49 crc kubenswrapper[4866]: I1213 22:17:49.872240 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-wj4kw" Dec 13 22:17:49 crc kubenswrapper[4866]: I1213 
22:17:49.913196 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-c5wt4" Dec 13 22:17:50 crc kubenswrapper[4866]: I1213 22:17:50.174951 4866 generic.go:334] "Generic (PLEG): container finished" podID="5bef92ad-0cb2-491e-a43d-cb01b5acd441" containerID="87eced118f4f2e8596a89eb3656db941c63b8ee0e18bb03c4951c3381037da1e" exitCode=0 Dec 13 22:17:50 crc kubenswrapper[4866]: I1213 22:17:50.175106 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5bef92ad-0cb2-491e-a43d-cb01b5acd441","Type":"ContainerDied","Data":"87eced118f4f2e8596a89eb3656db941c63b8ee0e18bb03c4951c3381037da1e"} Dec 13 22:17:50 crc kubenswrapper[4866]: I1213 22:17:50.186721 4866 generic.go:334] "Generic (PLEG): container finished" podID="b87d7d9b-aff1-45c9-8824-35fe7442cc07" containerID="0d1f9b0e4c7f0e39fcb9b19e3f31c32cd8ab36e75c8088064e778d63a039d6ec" exitCode=0 Dec 13 22:17:50 crc kubenswrapper[4866]: I1213 22:17:50.186806 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9rw56" event={"ID":"b87d7d9b-aff1-45c9-8824-35fe7442cc07","Type":"ContainerDied","Data":"0d1f9b0e4c7f0e39fcb9b19e3f31c32cd8ab36e75c8088064e778d63a039d6ec"} Dec 13 22:17:50 crc kubenswrapper[4866]: I1213 22:17:50.207366 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zgx58" event={"ID":"94716d23-2314-47b4-8159-f1f2f970c989","Type":"ContainerStarted","Data":"2f588a28508faa6c0a0d40209cecf5a181dc927da237bcd09105b7d3b21564e0"} Dec 13 22:17:50 crc kubenswrapper[4866]: I1213 22:17:50.214692 4866 generic.go:334] "Generic (PLEG): container finished" podID="35219f83-5e99-476a-9f5d-979f0739127d" containerID="241b04da837cc2bc713f4d15b8efc3b2151b3a5a472fdb2da9b0174dccc9efa2" exitCode=0 Dec 13 22:17:50 crc kubenswrapper[4866]: I1213 22:17:50.225741 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 13 22:17:50 crc kubenswrapper[4866]: I1213 22:17:50.226702 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"35219f83-5e99-476a-9f5d-979f0739127d","Type":"ContainerDied","Data":"241b04da837cc2bc713f4d15b8efc3b2151b3a5a472fdb2da9b0174dccc9efa2"} Dec 13 22:17:50 crc kubenswrapper[4866]: I1213 22:17:50.226744 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" event={"ID":"6fe29c2e-47b8-434d-a38f-8edd2992e345","Type":"ContainerStarted","Data":"0e99cb3975e1be7883ca57431dabdf5d7f35357b33a0b4009f785d721b39457f"} Dec 13 22:17:50 crc kubenswrapper[4866]: I1213 22:17:50.226758 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" event={"ID":"6fe29c2e-47b8-434d-a38f-8edd2992e345","Type":"ContainerStarted","Data":"0f0794a222da61b0ca55054cd1bf08ff618259152d6b606bca4f07d983207fbe"} Dec 13 22:17:50 crc kubenswrapper[4866]: I1213 22:17:50.235072 4866 patch_prober.go:28] interesting pod/router-default-5444994796-wbx6k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 22:17:50 crc kubenswrapper[4866]: [-]has-synced failed: reason withheld Dec 13 22:17:50 crc kubenswrapper[4866]: [+]process-running 
ok Dec 13 22:17:50 crc kubenswrapper[4866]: healthz check failed Dec 13 22:17:50 crc kubenswrapper[4866]: I1213 22:17:50.235140 4866 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wbx6k" podUID="f12ed525-d75a-402f-b6c5-ce6298cb98f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 22:17:50 crc kubenswrapper[4866]: I1213 22:17:50.320925 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" podStartSLOduration=44.320909575 podStartE2EDuration="44.320909575s" podCreationTimestamp="2025-12-13 22:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:50.272188144 +0000 UTC m=+68.313526716" watchObservedRunningTime="2025-12-13 22:17:50.320909575 +0000 UTC m=+68.362248127" Dec 13 22:17:51 crc kubenswrapper[4866]: I1213 22:17:51.228483 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:17:51 crc kubenswrapper[4866]: I1213 22:17:51.236449 4866 patch_prober.go:28] interesting pod/router-default-5444994796-wbx6k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 22:17:51 crc kubenswrapper[4866]: [-]has-synced failed: reason withheld Dec 13 22:17:51 crc kubenswrapper[4866]: [+]process-running ok Dec 13 22:17:51 crc kubenswrapper[4866]: healthz check failed Dec 13 22:17:51 crc kubenswrapper[4866]: I1213 22:17:51.236612 4866 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wbx6k" podUID="f12ed525-d75a-402f-b6c5-ce6298cb98f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 22:17:51 crc kubenswrapper[4866]: I1213 22:17:51.497505 4866 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 13 22:17:51 crc kubenswrapper[4866]: I1213 22:17:51.566805 4866 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 13 22:17:51 crc kubenswrapper[4866]: I1213 22:17:51.617366 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5bef92ad-0cb2-491e-a43d-cb01b5acd441-kubelet-dir\") pod \"5bef92ad-0cb2-491e-a43d-cb01b5acd441\" (UID: \"5bef92ad-0cb2-491e-a43d-cb01b5acd441\") " Dec 13 22:17:51 crc kubenswrapper[4866]: I1213 22:17:51.617445 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5bef92ad-0cb2-491e-a43d-cb01b5acd441-kube-api-access\") pod \"5bef92ad-0cb2-491e-a43d-cb01b5acd441\" (UID: \"5bef92ad-0cb2-491e-a43d-cb01b5acd441\") " Dec 13 22:17:51 crc kubenswrapper[4866]: I1213 22:17:51.617596 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bef92ad-0cb2-491e-a43d-cb01b5acd441-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5bef92ad-0cb2-491e-a43d-cb01b5acd441" (UID: "5bef92ad-0cb2-491e-a43d-cb01b5acd441"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 22:17:51 crc kubenswrapper[4866]: I1213 22:17:51.617936 4866 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5bef92ad-0cb2-491e-a43d-cb01b5acd441-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:51 crc kubenswrapper[4866]: I1213 22:17:51.622719 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bef92ad-0cb2-491e-a43d-cb01b5acd441-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5bef92ad-0cb2-491e-a43d-cb01b5acd441" (UID: "5bef92ad-0cb2-491e-a43d-cb01b5acd441"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:51 crc kubenswrapper[4866]: I1213 22:17:51.718380 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35219f83-5e99-476a-9f5d-979f0739127d-kube-api-access\") pod \"35219f83-5e99-476a-9f5d-979f0739127d\" (UID: \"35219f83-5e99-476a-9f5d-979f0739127d\") " Dec 13 22:17:51 crc kubenswrapper[4866]: I1213 22:17:51.718429 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/35219f83-5e99-476a-9f5d-979f0739127d-kubelet-dir\") pod \"35219f83-5e99-476a-9f5d-979f0739127d\" (UID: \"35219f83-5e99-476a-9f5d-979f0739127d\") " Dec 13 22:17:51 crc kubenswrapper[4866]: I1213 22:17:51.718695 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35219f83-5e99-476a-9f5d-979f0739127d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "35219f83-5e99-476a-9f5d-979f0739127d" (UID: "35219f83-5e99-476a-9f5d-979f0739127d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 22:17:51 crc kubenswrapper[4866]: I1213 22:17:51.718772 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5bef92ad-0cb2-491e-a43d-cb01b5acd441-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:51 crc kubenswrapper[4866]: I1213 22:17:51.732963 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35219f83-5e99-476a-9f5d-979f0739127d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "35219f83-5e99-476a-9f5d-979f0739127d" (UID: "35219f83-5e99-476a-9f5d-979f0739127d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:17:51 crc kubenswrapper[4866]: I1213 22:17:51.819582 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35219f83-5e99-476a-9f5d-979f0739127d-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:51 crc kubenswrapper[4866]: I1213 22:17:51.819806 4866 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/35219f83-5e99-476a-9f5d-979f0739127d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 13 22:17:52 crc kubenswrapper[4866]: I1213 22:17:52.236329 4866 patch_prober.go:28] interesting pod/router-default-5444994796-wbx6k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 22:17:52 crc kubenswrapper[4866]: [-]has-synced failed: reason withheld Dec 13 22:17:52 crc kubenswrapper[4866]: [+]process-running ok Dec 13 22:17:52 crc kubenswrapper[4866]: healthz check failed Dec 13 22:17:52 crc kubenswrapper[4866]: I1213 22:17:52.236400 4866 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wbx6k" podUID="f12ed525-d75a-402f-b6c5-ce6298cb98f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 22:17:52 crc kubenswrapper[4866]: I1213 22:17:52.245262 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zgx58" event={"ID":"94716d23-2314-47b4-8159-f1f2f970c989","Type":"ContainerStarted","Data":"cf72cba049922f8d30bb89b7c54d19c3dbdc6164465a9a36db2de5912477c389"} Dec 13 22:17:52 crc kubenswrapper[4866]: I1213 22:17:52.249688 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"35219f83-5e99-476a-9f5d-979f0739127d","Type":"ContainerDied","Data":"6b8729c32e1f0183f8459190ebae931d9f9f372b54a1ca2d050c5519def50994"} Dec 13 22:17:52 crc kubenswrapper[4866]: I1213 22:17:52.249719 4866 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b8729c32e1f0183f8459190ebae931d9f9f372b54a1ca2d050c5519def50994" Dec 13 22:17:52 crc kubenswrapper[4866]: I1213 22:17:52.249703 4866 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 13 22:17:52 crc kubenswrapper[4866]: I1213 22:17:52.252299 4866 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 13 22:17:52 crc kubenswrapper[4866]: I1213 22:17:52.252312 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5bef92ad-0cb2-491e-a43d-cb01b5acd441","Type":"ContainerDied","Data":"d9fe92e1a1b687c204e7d4e53f74661d4d0451d20752211994025c733a4ae6cc"} Dec 13 22:17:52 crc kubenswrapper[4866]: I1213 22:17:52.252355 4866 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9fe92e1a1b687c204e7d4e53f74661d4d0451d20752211994025c733a4ae6cc" Dec 13 22:17:53 crc kubenswrapper[4866]: I1213 22:17:53.239707 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-wbx6k" Dec 13 22:17:53 crc kubenswrapper[4866]: I1213 22:17:53.245492 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-wbx6k" Dec 13 22:17:53 crc kubenswrapper[4866]: I1213 22:17:53.262686 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-zgx58" podStartSLOduration=22.262668862 podStartE2EDuration="22.262668862s" podCreationTimestamp="2025-12-13 22:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:17:52.267288165 +0000 UTC m=+70.308626717" watchObservedRunningTime="2025-12-13 22:17:53.262668862 +0000 UTC m=+71.304007404" Dec 13 22:17:53 crc kubenswrapper[4866]: I1213 22:17:53.658346 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:17:53 crc kubenswrapper[4866]: I1213 22:17:53.873945 4866 patch_prober.go:28] interesting pod/downloads-7954f5f757-tlbgv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Dec 13 22:17:53 crc kubenswrapper[4866]: I1213 22:17:53.873990 4866 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tlbgv" podUID="b5141b6f-cd05-4499-ae0d-cdd14f3f5a61" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Dec 13 22:17:53 crc kubenswrapper[4866]: I1213 22:17:53.874114 4866 patch_prober.go:28] interesting pod/downloads-7954f5f757-tlbgv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Dec 13 22:17:53 crc kubenswrapper[4866]: I1213 22:17:53.874133 4866 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-tlbgv" podUID="b5141b6f-cd05-4499-ae0d-cdd14f3f5a61" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Dec 13 22:17:54 crc kubenswrapper[4866]: I1213 22:17:54.158785 4866 patch_prober.go:28] interesting pod/console-f9d7485db-zp55r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Dec 13 22:17:54 crc kubenswrapper[4866]: I1213 
22:17:54.158838 4866 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-zp55r" podUID="0f6a72e7-f4e0-4628-91ad-c3f81514f9f9" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Dec 13 22:17:55 crc kubenswrapper[4866]: E1213 22:17:55.037105 4866 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9c9aaf4f8190bdeeec308225ac1adf791809e7a295803665e1107ae7dd0e5f40" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 13 22:17:55 crc kubenswrapper[4866]: E1213 22:17:55.046389 4866 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9c9aaf4f8190bdeeec308225ac1adf791809e7a295803665e1107ae7dd0e5f40" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 13 22:17:55 crc kubenswrapper[4866]: E1213 22:17:55.048772 4866 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9c9aaf4f8190bdeeec308225ac1adf791809e7a295803665e1107ae7dd0e5f40" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 13 22:17:55 crc kubenswrapper[4866]: E1213 22:17:55.048841 4866 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-kg5zb" podUID="eb8f2404-0acf-4a0a-a581-b5c767351742" containerName="kube-multus-additional-cni-plugins" Dec 13 22:18:03 crc kubenswrapper[4866]: I1213 22:18:03.881391 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-tlbgv" Dec 13 22:18:04 crc kubenswrapper[4866]: I1213 22:18:04.169479 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-zp55r" Dec 13 22:18:04 crc kubenswrapper[4866]: I1213 22:18:04.175247 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-zp55r" Dec 13 22:18:05 crc kubenswrapper[4866]: E1213 22:18:05.031361 4866 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9c9aaf4f8190bdeeec308225ac1adf791809e7a295803665e1107ae7dd0e5f40" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 13 22:18:05 crc kubenswrapper[4866]: E1213 22:18:05.033094 4866 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9c9aaf4f8190bdeeec308225ac1adf791809e7a295803665e1107ae7dd0e5f40" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 13 22:18:05 crc kubenswrapper[4866]: E1213 22:18:05.034491 4866 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9c9aaf4f8190bdeeec308225ac1adf791809e7a295803665e1107ae7dd0e5f40" 
cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 13 22:18:05 crc kubenswrapper[4866]: E1213 22:18:05.034549 4866 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-kg5zb" podUID="eb8f2404-0acf-4a0a-a581-b5c767351742" containerName="kube-multus-additional-cni-plugins" Dec 13 22:18:09 crc kubenswrapper[4866]: I1213 22:18:09.941829 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:18:09 crc kubenswrapper[4866]: I1213 22:18:09.952702 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 13 22:18:12 crc kubenswrapper[4866]: I1213 22:18:12.927695 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-kg5zb_eb8f2404-0acf-4a0a-a581-b5c767351742/kube-multus-additional-cni-plugins/0.log" Dec 13 22:18:12 crc kubenswrapper[4866]: I1213 22:18:12.927994 4866 generic.go:334] "Generic (PLEG): container finished" podID="eb8f2404-0acf-4a0a-a581-b5c767351742" containerID="9c9aaf4f8190bdeeec308225ac1adf791809e7a295803665e1107ae7dd0e5f40" exitCode=137 Dec 13 22:18:12 crc kubenswrapper[4866]: I1213 22:18:12.928055 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-kg5zb" event={"ID":"eb8f2404-0acf-4a0a-a581-b5c767351742","Type":"ContainerDied","Data":"9c9aaf4f8190bdeeec308225ac1adf791809e7a295803665e1107ae7dd0e5f40"} Dec 13 22:18:14 crc kubenswrapper[4866]: I1213 22:18:14.710506 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r85zh" Dec 13 22:18:14 crc kubenswrapper[4866]: I1213 22:18:14.729238 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=5.729218271 podStartE2EDuration="5.729218271s" podCreationTimestamp="2025-12-13 22:18:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:18:12.231021206 +0000 UTC m=+90.272359758" watchObservedRunningTime="2025-12-13 22:18:14.729218271 +0000 UTC m=+92.770556823" Dec 13 22:18:15 crc kubenswrapper[4866]: E1213 22:18:15.027929 4866 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c9aaf4f8190bdeeec308225ac1adf791809e7a295803665e1107ae7dd0e5f40 is running failed: container process not found" containerID="9c9aaf4f8190bdeeec308225ac1adf791809e7a295803665e1107ae7dd0e5f40" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 13 22:18:15 crc kubenswrapper[4866]: E1213 22:18:15.028480 4866 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c9aaf4f8190bdeeec308225ac1adf791809e7a295803665e1107ae7dd0e5f40 is running failed: container process not found" containerID="9c9aaf4f8190bdeeec308225ac1adf791809e7a295803665e1107ae7dd0e5f40" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 13 22:18:15 crc kubenswrapper[4866]: E1213 22:18:15.028943 4866 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container 
is not created or running: checking if PID of 9c9aaf4f8190bdeeec308225ac1adf791809e7a295803665e1107ae7dd0e5f40 is running failed: container process not found" containerID="9c9aaf4f8190bdeeec308225ac1adf791809e7a295803665e1107ae7dd0e5f40" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 13 22:18:15 crc kubenswrapper[4866]: E1213 22:18:15.028995 4866 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c9aaf4f8190bdeeec308225ac1adf791809e7a295803665e1107ae7dd0e5f40 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-kg5zb" podUID="eb8f2404-0acf-4a0a-a581-b5c767351742" containerName="kube-multus-additional-cni-plugins" Dec 13 22:18:18 crc kubenswrapper[4866]: I1213 22:18:18.761900 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 22:18:21 crc kubenswrapper[4866]: I1213 22:18:21.356437 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 13 22:18:21 crc kubenswrapper[4866]: E1213 22:18:21.356696 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bef92ad-0cb2-491e-a43d-cb01b5acd441" containerName="pruner" Dec 13 22:18:21 crc kubenswrapper[4866]: I1213 22:18:21.356711 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bef92ad-0cb2-491e-a43d-cb01b5acd441" containerName="pruner" Dec 13 22:18:21 crc kubenswrapper[4866]: E1213 22:18:21.356724 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35219f83-5e99-476a-9f5d-979f0739127d" containerName="pruner" Dec 13 22:18:21 crc kubenswrapper[4866]: I1213 22:18:21.356731 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="35219f83-5e99-476a-9f5d-979f0739127d" containerName="pruner" Dec 13 22:18:21 crc kubenswrapper[4866]: I1213 22:18:21.356836 4866 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bef92ad-0cb2-491e-a43d-cb01b5acd441" containerName="pruner" Dec 13 22:18:21 crc kubenswrapper[4866]: I1213 22:18:21.356859 4866 memory_manager.go:354] "RemoveStaleState removing state" podUID="35219f83-5e99-476a-9f5d-979f0739127d" containerName="pruner" Dec 13 22:18:21 crc kubenswrapper[4866]: I1213 22:18:21.357358 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 13 22:18:21 crc kubenswrapper[4866]: I1213 22:18:21.362618 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 13 22:18:21 crc kubenswrapper[4866]: I1213 22:18:21.364412 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 13 22:18:21 crc kubenswrapper[4866]: I1213 22:18:21.368963 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 13 22:18:21 crc kubenswrapper[4866]: I1213 22:18:21.403569 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0eba0853-9926-49f5-be98-96822f62de20-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0eba0853-9926-49f5-be98-96822f62de20\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 13 22:18:21 crc kubenswrapper[4866]: I1213 22:18:21.403606 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0eba0853-9926-49f5-be98-96822f62de20-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0eba0853-9926-49f5-be98-96822f62de20\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 13 22:18:21 crc kubenswrapper[4866]: I1213 22:18:21.504733 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0eba0853-9926-49f5-be98-96822f62de20-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0eba0853-9926-49f5-be98-96822f62de20\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 13 22:18:21 crc kubenswrapper[4866]: I1213 22:18:21.504778 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0eba0853-9926-49f5-be98-96822f62de20-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0eba0853-9926-49f5-be98-96822f62de20\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 13 22:18:21 crc kubenswrapper[4866]: I1213 22:18:21.504865 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0eba0853-9926-49f5-be98-96822f62de20-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0eba0853-9926-49f5-be98-96822f62de20\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 13 22:18:21 crc kubenswrapper[4866]: I1213 22:18:21.532078 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0eba0853-9926-49f5-be98-96822f62de20-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0eba0853-9926-49f5-be98-96822f62de20\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 13 22:18:21 crc kubenswrapper[4866]: I1213 22:18:21.714559 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 13 22:18:25 crc kubenswrapper[4866]: E1213 22:18:25.028028 4866 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c9aaf4f8190bdeeec308225ac1adf791809e7a295803665e1107ae7dd0e5f40 is running failed: container process not found" containerID="9c9aaf4f8190bdeeec308225ac1adf791809e7a295803665e1107ae7dd0e5f40" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 22:18:25 crc kubenswrapper[4866]: E1213 22:18:25.028767 4866 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c9aaf4f8190bdeeec308225ac1adf791809e7a295803665e1107ae7dd0e5f40 is running failed: container process not found" containerID="9c9aaf4f8190bdeeec308225ac1adf791809e7a295803665e1107ae7dd0e5f40" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 22:18:25 crc kubenswrapper[4866]: E1213 22:18:25.029118 4866 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c9aaf4f8190bdeeec308225ac1adf791809e7a295803665e1107ae7dd0e5f40 is running failed: container process not found" containerID="9c9aaf4f8190bdeeec308225ac1adf791809e7a295803665e1107ae7dd0e5f40" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 22:18:25 crc kubenswrapper[4866]: E1213 22:18:25.029141 4866 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c9aaf4f8190bdeeec308225ac1adf791809e7a295803665e1107ae7dd0e5f40 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-kg5zb" podUID="eb8f2404-0acf-4a0a-a581-b5c767351742" containerName="kube-multus-additional-cni-plugins"
Dec 13 22:18:25 crc kubenswrapper[4866]: I1213 22:18:25.355370 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
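The repeated "ExecSync cmd from runtime service failed" errors above come from the readiness probe of the cni-sysctl-allowlist DaemonSet pod: the kubelet asks the container runtime, over the CRI ExecSync call, to run ["/bin/bash","-c","test -f /ready/ready"] inside the container. While the container is stopping (and, once it exits with code 137, gone), the runtime cannot register the exec, so the probe errors rather than merely failing. A local sketch of the check itself, using a plain fork/exec instead of the ExecSync RPC the kubelet actually uses:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// runExecProbe runs the same command as the probe in the log and maps the
// outcome the way an exec probe does: exit code 0 means ready, a non-zero
// exit means not ready, and failure to run the command at all is a probe
// error (the case the log shows while the container is stopping).
func runExecProbe() (ready bool, probeErr error) {
	cmd := exec.Command("/bin/bash", "-c", "test -f /ready/ready")
	if err := cmd.Run(); err != nil {
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			return false, nil // command ran; the ready file just is not there
		}
		return false, err // could not exec at all
	}
	return true, nil
}

func main() {
	ready, err := runExecProbe()
	fmt.Println("ready:", ready, "probe error:", err)
}
```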
Dec 13 22:18:25 crc kubenswrapper[4866]: I1213 22:18:25.357306 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Dec 13 22:18:25 crc kubenswrapper[4866]: I1213 22:18:25.369127 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Dec 13 22:18:25 crc kubenswrapper[4866]: I1213 22:18:25.458911 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/98e21a24-2f77-4e7a-a3ec-8c50a1822441-var-lock\") pod \"installer-9-crc\" (UID: \"98e21a24-2f77-4e7a-a3ec-8c50a1822441\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 13 22:18:25 crc kubenswrapper[4866]: I1213 22:18:25.459016 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98e21a24-2f77-4e7a-a3ec-8c50a1822441-kube-api-access\") pod \"installer-9-crc\" (UID: \"98e21a24-2f77-4e7a-a3ec-8c50a1822441\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 13 22:18:25 crc kubenswrapper[4866]: I1213 22:18:25.459086 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98e21a24-2f77-4e7a-a3ec-8c50a1822441-kubelet-dir\") pod \"installer-9-crc\" (UID: \"98e21a24-2f77-4e7a-a3ec-8c50a1822441\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 13 22:18:25 crc kubenswrapper[4866]: I1213 22:18:25.560558 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98e21a24-2f77-4e7a-a3ec-8c50a1822441-kube-api-access\") pod \"installer-9-crc\" (UID: \"98e21a24-2f77-4e7a-a3ec-8c50a1822441\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 13 22:18:25 crc kubenswrapper[4866]: I1213 22:18:25.560651 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98e21a24-2f77-4e7a-a3ec-8c50a1822441-kubelet-dir\") pod \"installer-9-crc\" (UID: \"98e21a24-2f77-4e7a-a3ec-8c50a1822441\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 13 22:18:25 crc kubenswrapper[4866]: I1213 22:18:25.560687 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/98e21a24-2f77-4e7a-a3ec-8c50a1822441-var-lock\") pod \"installer-9-crc\" (UID: \"98e21a24-2f77-4e7a-a3ec-8c50a1822441\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 13 22:18:25 crc kubenswrapper[4866]: I1213 22:18:25.560752 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/98e21a24-2f77-4e7a-a3ec-8c50a1822441-var-lock\") pod \"installer-9-crc\" (UID: \"98e21a24-2f77-4e7a-a3ec-8c50a1822441\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 13 22:18:25 crc kubenswrapper[4866]: I1213 22:18:25.560824 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98e21a24-2f77-4e7a-a3ec-8c50a1822441-kubelet-dir\") pod \"installer-9-crc\" (UID: \"98e21a24-2f77-4e7a-a3ec-8c50a1822441\") " pod="openshift-kube-apiserver/installer-9-crc"
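The installer-9-crc records above (the kube-api-access mount completes just below) show the volume reconciler's fixed ordering per volume: VerifyControllerAttachedVolume confirms the volume in the actual state of the world, MountVolume starts, then MountVolume.SetUp reports success, and only then can the pod get a sandbox. A small sketch of that progression as a state machine; the state names are mine, only the ordering is taken from the log:

```go
package main

import "fmt"

// volumeState tracks one volume through the stages the reconciler logs.
type volumeState int

const (
	attachedVerified volumeState = iota // reconciler_common.go:245
	mountStarted                        // reconciler_common.go:218
	setUpSucceeded                      // operation_generator.go:637
)

// advance moves a volume exactly one stage forward; skipping a stage is not
// allowed, mirroring the strictly ordered log lines for var-lock,
// kubelet-dir and kube-api-access above.
func advance(s volumeState) (volumeState, error) {
	switch s {
	case attachedVerified:
		return mountStarted, nil
	case mountStarted:
		return setUpSucceeded, nil
	default:
		return s, fmt.Errorf("volume already set up")
	}
}

func main() {
	s := attachedVerified
	for s != setUpSucceeded {
		s, _ = advance(s)
		fmt.Println("stage:", s)
	}
}
```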
\"98e21a24-2f77-4e7a-a3ec-8c50a1822441\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 13 22:18:25 crc kubenswrapper[4866]: I1213 22:18:25.677503 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 13 22:18:35 crc kubenswrapper[4866]: E1213 22:18:35.027630 4866 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c9aaf4f8190bdeeec308225ac1adf791809e7a295803665e1107ae7dd0e5f40 is running failed: container process not found" containerID="9c9aaf4f8190bdeeec308225ac1adf791809e7a295803665e1107ae7dd0e5f40" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 13 22:18:35 crc kubenswrapper[4866]: E1213 22:18:35.028705 4866 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c9aaf4f8190bdeeec308225ac1adf791809e7a295803665e1107ae7dd0e5f40 is running failed: container process not found" containerID="9c9aaf4f8190bdeeec308225ac1adf791809e7a295803665e1107ae7dd0e5f40" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 13 22:18:35 crc kubenswrapper[4866]: E1213 22:18:35.029269 4866 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c9aaf4f8190bdeeec308225ac1adf791809e7a295803665e1107ae7dd0e5f40 is running failed: container process not found" containerID="9c9aaf4f8190bdeeec308225ac1adf791809e7a295803665e1107ae7dd0e5f40" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 13 22:18:35 crc kubenswrapper[4866]: E1213 22:18:35.029297 4866 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c9aaf4f8190bdeeec308225ac1adf791809e7a295803665e1107ae7dd0e5f40 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-kg5zb" podUID="eb8f2404-0acf-4a0a-a581-b5c767351742" containerName="kube-multus-additional-cni-plugins" Dec 13 22:18:36 crc kubenswrapper[4866]: E1213 22:18:36.527925 4866 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 13 22:18:36 crc kubenswrapper[4866]: E1213 22:18:36.528497 4866 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hmzqx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-6hxqc_openshift-marketplace(902bee05-89e6-48b6-becf-d715d04dd8cd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 13 22:18:36 crc kubenswrapper[4866]: E1213 22:18:36.529693 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-6hxqc" podUID="902bee05-89e6-48b6-becf-d715d04dd8cd" Dec 13 22:18:38 crc kubenswrapper[4866]: E1213 22:18:38.779148 4866 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 13 22:18:38 crc kubenswrapper[4866]: E1213 22:18:38.779664 4866 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6f7b7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-cwcm4_openshift-marketplace(bb228672-927d-42e6-bcde-d5733629cae2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 13 22:18:38 crc kubenswrapper[4866]: E1213 22:18:38.780896 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-cwcm4" podUID="bb228672-927d-42e6-bcde-d5733629cae2" Dec 13 22:18:40 crc kubenswrapper[4866]: E1213 22:18:40.546643 4866 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 13 22:18:40 crc kubenswrapper[4866]: E1213 22:18:40.547122 4866 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-trz64,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-sj6bb_openshift-marketplace(a61d5864-61a4-46a9-a4f5-020d4ed879cd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 13 22:18:40 crc kubenswrapper[4866]: E1213 22:18:40.548342 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-sj6bb" podUID="a61d5864-61a4-46a9-a4f5-020d4ed879cd"
Dec 13 22:18:41 crc kubenswrapper[4866]: E1213 22:18:41.405903 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-cwcm4" podUID="bb228672-927d-42e6-bcde-d5733629cae2"
Dec 13 22:18:41 crc kubenswrapper[4866]: E1213 22:18:41.406334 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-sj6bb" podUID="a61d5864-61a4-46a9-a4f5-020d4ed879cd"
Dec 13 22:18:41 crc kubenswrapper[4866]: I1213 22:18:41.474034 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-kg5zb_eb8f2404-0acf-4a0a-a581-b5c767351742/kube-multus-additional-cni-plugins/0.log"
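Each ErrImagePull above ("context canceled" while copying the catalog image) makes the pod worker re-sync, and subsequent syncs are rejected with ImagePullBackOff until a per-image backoff window expires, which is why the marketplace pods keep alternating between the two messages. A sketch of the doubling backoff; the 10s initial delay and 5m cap are assumptions based on commonly cited kubelet defaults, not values read from this log:

```go
package main

import (
	"fmt"
	"time"
)

// pullDelay returns how long to wait before the next retry of a failing
// image pull, doubling from an assumed 10s base up to an assumed 5m cap.
func pullDelay(failures int) time.Duration {
	const (
		base     = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	d := base
	for i := 0; i < failures; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for n := 0; n <= 5; n++ {
		fmt.Printf("failures=%d next wait=%s\n", n, pullDelay(n))
	}
}
```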
Dec 13 22:18:41 crc kubenswrapper[4866]: I1213 22:18:41.474136 4866 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-kg5zb"
Dec 13 22:18:41 crc kubenswrapper[4866]: I1213 22:18:41.482459 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/eb8f2404-0acf-4a0a-a581-b5c767351742-cni-sysctl-allowlist\") pod \"eb8f2404-0acf-4a0a-a581-b5c767351742\" (UID: \"eb8f2404-0acf-4a0a-a581-b5c767351742\") "
Dec 13 22:18:41 crc kubenswrapper[4866]: I1213 22:18:41.482586 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/eb8f2404-0acf-4a0a-a581-b5c767351742-ready\") pod \"eb8f2404-0acf-4a0a-a581-b5c767351742\" (UID: \"eb8f2404-0acf-4a0a-a581-b5c767351742\") "
Dec 13 22:18:41 crc kubenswrapper[4866]: I1213 22:18:41.482670 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/eb8f2404-0acf-4a0a-a581-b5c767351742-tuning-conf-dir\") pod \"eb8f2404-0acf-4a0a-a581-b5c767351742\" (UID: \"eb8f2404-0acf-4a0a-a581-b5c767351742\") "
Dec 13 22:18:41 crc kubenswrapper[4866]: I1213 22:18:41.482742 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58d4z\" (UniqueName: \"kubernetes.io/projected/eb8f2404-0acf-4a0a-a581-b5c767351742-kube-api-access-58d4z\") pod \"eb8f2404-0acf-4a0a-a581-b5c767351742\" (UID: \"eb8f2404-0acf-4a0a-a581-b5c767351742\") "
Dec 13 22:18:41 crc kubenswrapper[4866]: I1213 22:18:41.483361 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb8f2404-0acf-4a0a-a581-b5c767351742-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "eb8f2404-0acf-4a0a-a581-b5c767351742" (UID: "eb8f2404-0acf-4a0a-a581-b5c767351742"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 22:18:41 crc kubenswrapper[4866]: I1213 22:18:41.483965 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb8f2404-0acf-4a0a-a581-b5c767351742-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "eb8f2404-0acf-4a0a-a581-b5c767351742" (UID: "eb8f2404-0acf-4a0a-a581-b5c767351742"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 22:18:41 crc kubenswrapper[4866]: I1213 22:18:41.484455 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb8f2404-0acf-4a0a-a581-b5c767351742-ready" (OuterVolumeSpecName: "ready") pod "eb8f2404-0acf-4a0a-a581-b5c767351742" (UID: "eb8f2404-0acf-4a0a-a581-b5c767351742"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 13 22:18:41 crc kubenswrapper[4866]: I1213 22:18:41.495891 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb8f2404-0acf-4a0a-a581-b5c767351742-kube-api-access-58d4z" (OuterVolumeSpecName: "kube-api-access-58d4z") pod "eb8f2404-0acf-4a0a-a581-b5c767351742" (UID: "eb8f2404-0acf-4a0a-a581-b5c767351742"). InnerVolumeSpecName "kube-api-access-58d4z".
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:18:41 crc kubenswrapper[4866]: E1213 22:18:41.510360 4866 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 13 22:18:41 crc kubenswrapper[4866]: E1213 22:18:41.510497 4866 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m888h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-ppd79_openshift-marketplace(9d76fbae-d65b-45df-aee5-0924d2ec35e8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 13 22:18:41 crc kubenswrapper[4866]: E1213 22:18:41.511692 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-ppd79" podUID="9d76fbae-d65b-45df-aee5-0924d2ec35e8" Dec 13 22:18:41 crc kubenswrapper[4866]: I1213 22:18:41.584604 4866 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/eb8f2404-0acf-4a0a-a581-b5c767351742-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Dec 13 22:18:41 crc kubenswrapper[4866]: I1213 22:18:41.584641 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58d4z\" (UniqueName: \"kubernetes.io/projected/eb8f2404-0acf-4a0a-a581-b5c767351742-kube-api-access-58d4z\") on node \"crc\" DevicePath \"\"" Dec 13 22:18:41 crc kubenswrapper[4866]: I1213 22:18:41.584653 4866 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/eb8f2404-0acf-4a0a-a581-b5c767351742-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 13 22:18:41 crc 
kubenswrapper[4866]: I1213 22:18:41.584662 4866 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/eb8f2404-0acf-4a0a-a581-b5c767351742-ready\") on node \"crc\" DevicePath \"\"" Dec 13 22:18:42 crc kubenswrapper[4866]: I1213 22:18:42.095793 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-kg5zb_eb8f2404-0acf-4a0a-a581-b5c767351742/kube-multus-additional-cni-plugins/0.log" Dec 13 22:18:42 crc kubenswrapper[4866]: I1213 22:18:42.095851 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-kg5zb" event={"ID":"eb8f2404-0acf-4a0a-a581-b5c767351742","Type":"ContainerDied","Data":"8d82516481abbca7404ebeaca0fe2a5f161163e0fb403bb6d23c7058f1cebf1a"} Dec 13 22:18:42 crc kubenswrapper[4866]: I1213 22:18:42.095890 4866 scope.go:117] "RemoveContainer" containerID="9c9aaf4f8190bdeeec308225ac1adf791809e7a295803665e1107ae7dd0e5f40" Dec 13 22:18:42 crc kubenswrapper[4866]: I1213 22:18:42.096006 4866 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-kg5zb" Dec 13 22:18:42 crc kubenswrapper[4866]: I1213 22:18:42.126858 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-kg5zb"] Dec 13 22:18:42 crc kubenswrapper[4866]: I1213 22:18:42.126938 4866 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-kg5zb"] Dec 13 22:18:42 crc kubenswrapper[4866]: I1213 22:18:42.222011 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb8f2404-0acf-4a0a-a581-b5c767351742" path="/var/lib/kubelet/pods/eb8f2404-0acf-4a0a-a581-b5c767351742/volumes" Dec 13 22:18:44 crc kubenswrapper[4866]: E1213 22:18:44.898983 4866 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 13 22:18:44 crc kubenswrapper[4866]: E1213 22:18:44.899492 4866 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h2wcb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-rw67t_openshift-marketplace(d019a2fd-1864-4c5b-8deb-62c898466850): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 13 22:18:44 crc kubenswrapper[4866]: E1213 22:18:44.900598 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-rw67t" podUID="d019a2fd-1864-4c5b-8deb-62c898466850" Dec 13 22:18:45 crc kubenswrapper[4866]: E1213 22:18:45.883824 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rw67t" podUID="d019a2fd-1864-4c5b-8deb-62c898466850" Dec 13 22:18:45 crc kubenswrapper[4866]: E1213 22:18:45.908541 4866 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 13 22:18:45 crc kubenswrapper[4866]: E1213 22:18:45.908718 4866 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w6km9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-9rw56_openshift-marketplace(b87d7d9b-aff1-45c9-8824-35fe7442cc07): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 13 22:18:45 crc kubenswrapper[4866]: E1213 22:18:45.910546 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-9rw56" podUID="b87d7d9b-aff1-45c9-8824-35fe7442cc07" Dec 13 22:18:45 crc kubenswrapper[4866]: E1213 22:18:45.911734 4866 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 13 22:18:45 crc kubenswrapper[4866]: E1213 22:18:45.911891 4866 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tztd2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-ls9kr_openshift-marketplace(bfe21f96-9dfe-4862-8010-11559fa1c2b4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 13 22:18:45 crc kubenswrapper[4866]: E1213 22:18:45.913270 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-ls9kr" podUID="bfe21f96-9dfe-4862-8010-11559fa1c2b4" Dec 13 22:18:45 crc kubenswrapper[4866]: E1213 22:18:45.920665 4866 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 13 22:18:45 crc kubenswrapper[4866]: E1213 22:18:45.920788 4866 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d7wcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-zdvwj_openshift-marketplace(1a29082d-49a1-4625-9d9b-568ef75773c8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 13 22:18:45 crc kubenswrapper[4866]: E1213 22:18:45.922075 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-zdvwj" podUID="1a29082d-49a1-4625-9d9b-568ef75773c8" Dec 13 22:18:46 crc kubenswrapper[4866]: E1213 22:18:46.123514 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-ls9kr" podUID="bfe21f96-9dfe-4862-8010-11559fa1c2b4" Dec 13 22:18:46 crc kubenswrapper[4866]: E1213 22:18:46.123712 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-9rw56" podUID="b87d7d9b-aff1-45c9-8824-35fe7442cc07" Dec 13 22:18:46 crc kubenswrapper[4866]: I1213 22:18:46.299750 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 13 22:18:46 crc kubenswrapper[4866]: I1213 22:18:46.365176 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 13 22:18:47 crc kubenswrapper[4866]: I1213 22:18:47.143425 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"98e21a24-2f77-4e7a-a3ec-8c50a1822441","Type":"ContainerStarted","Data":"22dff46a226ad7c07711c7532e0bc7b1572131faa0b0a708c945294e755d3bfd"} Dec 13 22:18:47 crc kubenswrapper[4866]: I1213 22:18:47.143472 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"98e21a24-2f77-4e7a-a3ec-8c50a1822441","Type":"ContainerStarted","Data":"1532517d6d7c33f855174556233ed56a34be065fcfb6fc2d5cbadbcb59570a47"} Dec 13 22:18:47 crc kubenswrapper[4866]: I1213 22:18:47.145129 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"0eba0853-9926-49f5-be98-96822f62de20","Type":"ContainerStarted","Data":"a7ac88110da6116c26d47bcf70db9db5c551774f55288dfabc303eb23e76a2cb"} Dec 13 22:18:47 crc kubenswrapper[4866]: I1213 22:18:47.145162 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"0eba0853-9926-49f5-be98-96822f62de20","Type":"ContainerStarted","Data":"766de13d8e36af72483c0debc342336d6836da7415820642fad7fa8b046678d7"} Dec 13 22:18:47 crc kubenswrapper[4866]: I1213 22:18:47.161728 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=22.161705529 podStartE2EDuration="22.161705529s" podCreationTimestamp="2025-12-13 22:18:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:18:47.156491049 +0000 UTC m=+125.197829601" watchObservedRunningTime="2025-12-13 22:18:47.161705529 +0000 UTC m=+125.203044081" Dec 13 22:18:47 crc kubenswrapper[4866]: I1213 22:18:47.181600 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=26.181579908 podStartE2EDuration="26.181579908s" podCreationTimestamp="2025-12-13 22:18:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:18:47.178342236 +0000 UTC m=+125.219680788" watchObservedRunningTime="2025-12-13 22:18:47.181579908 +0000 UTC m=+125.222918460" Dec 13 22:18:48 crc kubenswrapper[4866]: I1213 22:18:48.150831 4866 generic.go:334] "Generic (PLEG): container finished" podID="0eba0853-9926-49f5-be98-96822f62de20" containerID="a7ac88110da6116c26d47bcf70db9db5c551774f55288dfabc303eb23e76a2cb" exitCode=0 Dec 13 22:18:48 crc kubenswrapper[4866]: I1213 22:18:48.150898 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"0eba0853-9926-49f5-be98-96822f62de20","Type":"ContainerDied","Data":"a7ac88110da6116c26d47bcf70db9db5c551774f55288dfabc303eb23e76a2cb"} Dec 13 22:18:49 crc kubenswrapper[4866]: I1213 22:18:49.392446 4866 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 13 22:18:49 crc kubenswrapper[4866]: I1213 22:18:49.399887 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0eba0853-9926-49f5-be98-96822f62de20-kubelet-dir\") pod \"0eba0853-9926-49f5-be98-96822f62de20\" (UID: \"0eba0853-9926-49f5-be98-96822f62de20\") " Dec 13 22:18:49 crc kubenswrapper[4866]: I1213 22:18:49.399940 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0eba0853-9926-49f5-be98-96822f62de20-kube-api-access\") pod \"0eba0853-9926-49f5-be98-96822f62de20\" (UID: \"0eba0853-9926-49f5-be98-96822f62de20\") " Dec 13 22:18:49 crc kubenswrapper[4866]: I1213 22:18:49.400873 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0eba0853-9926-49f5-be98-96822f62de20-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0eba0853-9926-49f5-be98-96822f62de20" (UID: "0eba0853-9926-49f5-be98-96822f62de20"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 22:18:49 crc kubenswrapper[4866]: I1213 22:18:49.413318 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eba0853-9926-49f5-be98-96822f62de20-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0eba0853-9926-49f5-be98-96822f62de20" (UID: "0eba0853-9926-49f5-be98-96822f62de20"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:18:49 crc kubenswrapper[4866]: I1213 22:18:49.501189 4866 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0eba0853-9926-49f5-be98-96822f62de20-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 13 22:18:49 crc kubenswrapper[4866]: I1213 22:18:49.501235 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0eba0853-9926-49f5-be98-96822f62de20-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 13 22:18:50 crc kubenswrapper[4866]: I1213 22:18:50.164555 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"0eba0853-9926-49f5-be98-96822f62de20","Type":"ContainerDied","Data":"766de13d8e36af72483c0debc342336d6836da7415820642fad7fa8b046678d7"} Dec 13 22:18:50 crc kubenswrapper[4866]: I1213 22:18:50.164839 4866 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="766de13d8e36af72483c0debc342336d6836da7415820642fad7fa8b046678d7" Dec 13 22:18:50 crc kubenswrapper[4866]: I1213 22:18:50.164911 4866 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 13 22:18:52 crc kubenswrapper[4866]: I1213 22:18:52.182345 4866 generic.go:334] "Generic (PLEG): container finished" podID="902bee05-89e6-48b6-becf-d715d04dd8cd" containerID="4a0074b5751ce246885605ef8e78060c988e1add43b50a2ad9c12000bef2f3f0" exitCode=0 Dec 13 22:18:52 crc kubenswrapper[4866]: I1213 22:18:52.182418 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6hxqc" event={"ID":"902bee05-89e6-48b6-becf-d715d04dd8cd","Type":"ContainerDied","Data":"4a0074b5751ce246885605ef8e78060c988e1add43b50a2ad9c12000bef2f3f0"} Dec 13 22:18:54 crc kubenswrapper[4866]: I1213 22:18:54.097394 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fjf67"] Dec 13 22:18:54 crc kubenswrapper[4866]: I1213 22:18:54.194974 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6hxqc" event={"ID":"902bee05-89e6-48b6-becf-d715d04dd8cd","Type":"ContainerStarted","Data":"ecc2c3d86c114b82c821135f61519c0ab4b88b6502546ecc9eaf5dad972df5da"} Dec 13 22:18:54 crc kubenswrapper[4866]: I1213 22:18:54.198115 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sj6bb" event={"ID":"a61d5864-61a4-46a9-a4f5-020d4ed879cd","Type":"ContainerStarted","Data":"1600053e1652e0d302818c442e0df67c90e0dbb7445b391489668d3d3fbfb302"} Dec 13 22:18:54 crc kubenswrapper[4866]: I1213 22:18:54.300544 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6hxqc" podStartSLOduration=7.983199841 podStartE2EDuration="1m11.300526537s" podCreationTimestamp="2025-12-13 22:17:43 +0000 UTC" firstStartedPulling="2025-12-13 22:17:50.222264812 +0000 UTC m=+68.263603364" lastFinishedPulling="2025-12-13 22:18:53.539591508 +0000 UTC m=+131.580930060" observedRunningTime="2025-12-13 22:18:54.253366124 +0000 UTC m=+132.294704676" watchObservedRunningTime="2025-12-13 22:18:54.300526537 +0000 UTC m=+132.341865089" Dec 13 22:18:55 crc kubenswrapper[4866]: I1213 22:18:55.205709 4866 generic.go:334] "Generic (PLEG): container finished" podID="a61d5864-61a4-46a9-a4f5-020d4ed879cd" containerID="1600053e1652e0d302818c442e0df67c90e0dbb7445b391489668d3d3fbfb302" exitCode=0 Dec 13 22:18:55 crc kubenswrapper[4866]: I1213 22:18:55.205780 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sj6bb" event={"ID":"a61d5864-61a4-46a9-a4f5-020d4ed879cd","Type":"ContainerDied","Data":"1600053e1652e0d302818c442e0df67c90e0dbb7445b391489668d3d3fbfb302"} Dec 13 22:18:56 crc kubenswrapper[4866]: I1213 22:18:56.228073 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ppd79" event={"ID":"9d76fbae-d65b-45df-aee5-0924d2ec35e8","Type":"ContainerStarted","Data":"74096b8a3b2ef9c432e8dd51051b2bc5aa5355e1e05271595e1e76b16ad232b5"} Dec 13 22:18:56 crc kubenswrapper[4866]: I1213 22:18:56.229324 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sj6bb" event={"ID":"a61d5864-61a4-46a9-a4f5-020d4ed879cd","Type":"ContainerStarted","Data":"08c4c96ab9b1351e6c4d8cee0c86bc74aa979f2d7a75a0ef12cb830724aabf84"} Dec 13 22:18:56 crc kubenswrapper[4866]: I1213 22:18:56.252461 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sj6bb" 
podStartSLOduration=6.598000134 podStartE2EDuration="1m14.25244247s" podCreationTimestamp="2025-12-13 22:17:42 +0000 UTC" firstStartedPulling="2025-12-13 22:17:48.106862975 +0000 UTC m=+66.148201527" lastFinishedPulling="2025-12-13 22:18:55.761305311 +0000 UTC m=+133.802643863" observedRunningTime="2025-12-13 22:18:56.249922047 +0000 UTC m=+134.291260599" watchObservedRunningTime="2025-12-13 22:18:56.25244247 +0000 UTC m=+134.293781022" Dec 13 22:18:57 crc kubenswrapper[4866]: I1213 22:18:57.236849 4866 generic.go:334] "Generic (PLEG): container finished" podID="9d76fbae-d65b-45df-aee5-0924d2ec35e8" containerID="74096b8a3b2ef9c432e8dd51051b2bc5aa5355e1e05271595e1e76b16ad232b5" exitCode=0 Dec 13 22:18:57 crc kubenswrapper[4866]: I1213 22:18:57.236900 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ppd79" event={"ID":"9d76fbae-d65b-45df-aee5-0924d2ec35e8","Type":"ContainerDied","Data":"74096b8a3b2ef9c432e8dd51051b2bc5aa5355e1e05271595e1e76b16ad232b5"} Dec 13 22:18:58 crc kubenswrapper[4866]: I1213 22:18:58.259556 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwcm4" event={"ID":"bb228672-927d-42e6-bcde-d5733629cae2","Type":"ContainerStarted","Data":"e3987f1947baedcac73e8d3586ecdc56f6ab49989b355d762123519e80462716"} Dec 13 22:18:59 crc kubenswrapper[4866]: I1213 22:18:59.266096 4866 generic.go:334] "Generic (PLEG): container finished" podID="bb228672-927d-42e6-bcde-d5733629cae2" containerID="e3987f1947baedcac73e8d3586ecdc56f6ab49989b355d762123519e80462716" exitCode=0 Dec 13 22:18:59 crc kubenswrapper[4866]: I1213 22:18:59.266155 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwcm4" event={"ID":"bb228672-927d-42e6-bcde-d5733629cae2","Type":"ContainerDied","Data":"e3987f1947baedcac73e8d3586ecdc56f6ab49989b355d762123519e80462716"} Dec 13 22:18:59 crc kubenswrapper[4866]: I1213 22:18:59.268678 4866 generic.go:334] "Generic (PLEG): container finished" podID="bfe21f96-9dfe-4862-8010-11559fa1c2b4" containerID="dba0a8f06ae8cfdb6a18d15e00cc5244e7a2ebf5bcc59654f478426218918e19" exitCode=0 Dec 13 22:18:59 crc kubenswrapper[4866]: I1213 22:18:59.268756 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ls9kr" event={"ID":"bfe21f96-9dfe-4862-8010-11559fa1c2b4","Type":"ContainerDied","Data":"dba0a8f06ae8cfdb6a18d15e00cc5244e7a2ebf5bcc59654f478426218918e19"} Dec 13 22:18:59 crc kubenswrapper[4866]: I1213 22:18:59.270614 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ppd79" event={"ID":"9d76fbae-d65b-45df-aee5-0924d2ec35e8","Type":"ContainerStarted","Data":"d22264a5a2852871716c5b6ce8be84a222012c3e13e2b87354b37ef3a8d5c0c1"} Dec 13 22:18:59 crc kubenswrapper[4866]: I1213 22:18:59.370850 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ppd79" podStartSLOduration=6.348475521 podStartE2EDuration="1m18.370828747s" podCreationTimestamp="2025-12-13 22:17:41 +0000 UTC" firstStartedPulling="2025-12-13 22:17:46.066001625 +0000 UTC m=+64.107340177" lastFinishedPulling="2025-12-13 22:18:58.088354851 +0000 UTC m=+136.129693403" observedRunningTime="2025-12-13 22:18:59.36782054 +0000 UTC m=+137.409159092" watchObservedRunningTime="2025-12-13 22:18:59.370828747 +0000 UTC m=+137.412167299" Dec 13 22:19:00 crc kubenswrapper[4866]: I1213 22:19:00.282930 4866 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-operators-9rw56" event={"ID":"b87d7d9b-aff1-45c9-8824-35fe7442cc07","Type":"ContainerStarted","Data":"00987c67779a0234fdaecc7d38719874ae4db926898ff6334f31173742c954e9"} Dec 13 22:19:01 crc kubenswrapper[4866]: I1213 22:19:01.289395 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ls9kr" event={"ID":"bfe21f96-9dfe-4862-8010-11559fa1c2b4","Type":"ContainerStarted","Data":"9a0f4eb4f40e60860659e20deaf86ea2ea0db1ebde2ef478078b7d373d67fd83"} Dec 13 22:19:01 crc kubenswrapper[4866]: I1213 22:19:01.292416 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rw67t" event={"ID":"d019a2fd-1864-4c5b-8deb-62c898466850","Type":"ContainerStarted","Data":"755edc36d51cf054e5c2a87215e40b6b12ffcf129e33d41d8e49f714abcf2b57"} Dec 13 22:19:01 crc kubenswrapper[4866]: I1213 22:19:01.294267 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdvwj" event={"ID":"1a29082d-49a1-4625-9d9b-568ef75773c8","Type":"ContainerStarted","Data":"6a9489ac5a0bc95cf1c84de0c1e9fb176642788ccdfaac58a80279af8b166770"} Dec 13 22:19:01 crc kubenswrapper[4866]: I1213 22:19:01.296103 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwcm4" event={"ID":"bb228672-927d-42e6-bcde-d5733629cae2","Type":"ContainerStarted","Data":"aa48726995187ce0e28e1f8e69d6612b22876d04b52ad06bf09592ef4f801bb4"} Dec 13 22:19:01 crc kubenswrapper[4866]: I1213 22:19:01.315678 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ls9kr" podStartSLOduration=6.725955679 podStartE2EDuration="1m18.315659904s" podCreationTimestamp="2025-12-13 22:17:43 +0000 UTC" firstStartedPulling="2025-12-13 22:17:49.14223352 +0000 UTC m=+67.183572072" lastFinishedPulling="2025-12-13 22:19:00.731937745 +0000 UTC m=+138.773276297" observedRunningTime="2025-12-13 22:19:01.315247722 +0000 UTC m=+139.356586274" watchObservedRunningTime="2025-12-13 22:19:01.315659904 +0000 UTC m=+139.356998456" Dec 13 22:19:01 crc kubenswrapper[4866]: I1213 22:19:01.362481 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cwcm4" podStartSLOduration=7.711056334 podStartE2EDuration="1m19.362463979s" podCreationTimestamp="2025-12-13 22:17:42 +0000 UTC" firstStartedPulling="2025-12-13 22:17:49.135558183 +0000 UTC m=+67.176896735" lastFinishedPulling="2025-12-13 22:19:00.786965828 +0000 UTC m=+138.828304380" observedRunningTime="2025-12-13 22:19:01.344265072 +0000 UTC m=+139.385603624" watchObservedRunningTime="2025-12-13 22:19:01.362463979 +0000 UTC m=+139.403802531" Dec 13 22:19:01 crc kubenswrapper[4866]: I1213 22:19:01.516528 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ppd79" Dec 13 22:19:01 crc kubenswrapper[4866]: I1213 22:19:01.519885 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ppd79" Dec 13 22:19:02 crc kubenswrapper[4866]: I1213 22:19:02.311496 4866 generic.go:334] "Generic (PLEG): container finished" podID="d019a2fd-1864-4c5b-8deb-62c898466850" containerID="755edc36d51cf054e5c2a87215e40b6b12ffcf129e33d41d8e49f714abcf2b57" exitCode=0 Dec 13 22:19:02 crc kubenswrapper[4866]: I1213 22:19:02.311550 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-rw67t" event={"ID":"d019a2fd-1864-4c5b-8deb-62c898466850","Type":"ContainerDied","Data":"755edc36d51cf054e5c2a87215e40b6b12ffcf129e33d41d8e49f714abcf2b57"} Dec 13 22:19:02 crc kubenswrapper[4866]: I1213 22:19:02.313761 4866 generic.go:334] "Generic (PLEG): container finished" podID="b87d7d9b-aff1-45c9-8824-35fe7442cc07" containerID="00987c67779a0234fdaecc7d38719874ae4db926898ff6334f31173742c954e9" exitCode=0 Dec 13 22:19:02 crc kubenswrapper[4866]: I1213 22:19:02.313808 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9rw56" event={"ID":"b87d7d9b-aff1-45c9-8824-35fe7442cc07","Type":"ContainerDied","Data":"00987c67779a0234fdaecc7d38719874ae4db926898ff6334f31173742c954e9"} Dec 13 22:19:02 crc kubenswrapper[4866]: I1213 22:19:02.367044 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ppd79" Dec 13 22:19:02 crc kubenswrapper[4866]: I1213 22:19:02.992013 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sj6bb" Dec 13 22:19:02 crc kubenswrapper[4866]: I1213 22:19:02.992175 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sj6bb" Dec 13 22:19:03 crc kubenswrapper[4866]: I1213 22:19:03.034935 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sj6bb" Dec 13 22:19:03 crc kubenswrapper[4866]: I1213 22:19:03.365801 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sj6bb" Dec 13 22:19:03 crc kubenswrapper[4866]: I1213 22:19:03.710153 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cwcm4" Dec 13 22:19:03 crc kubenswrapper[4866]: I1213 22:19:03.710211 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cwcm4" Dec 13 22:19:03 crc kubenswrapper[4866]: I1213 22:19:03.725793 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ppd79" Dec 13 22:19:03 crc kubenswrapper[4866]: I1213 22:19:03.755631 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cwcm4" Dec 13 22:19:03 crc kubenswrapper[4866]: I1213 22:19:03.928347 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6hxqc" Dec 13 22:19:03 crc kubenswrapper[4866]: I1213 22:19:03.928394 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6hxqc" Dec 13 22:19:03 crc kubenswrapper[4866]: I1213 22:19:03.963485 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6hxqc" Dec 13 22:19:04 crc kubenswrapper[4866]: I1213 22:19:04.365002 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6hxqc" Dec 13 22:19:04 crc kubenswrapper[4866]: I1213 22:19:04.462725 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ls9kr" Dec 13 22:19:04 crc kubenswrapper[4866]: I1213 22:19:04.462772 4866 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ls9kr" Dec 13 22:19:04 crc kubenswrapper[4866]: I1213 22:19:04.497033 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ls9kr" Dec 13 22:19:05 crc kubenswrapper[4866]: I1213 22:19:05.329525 4866 generic.go:334] "Generic (PLEG): container finished" podID="1a29082d-49a1-4625-9d9b-568ef75773c8" containerID="6a9489ac5a0bc95cf1c84de0c1e9fb176642788ccdfaac58a80279af8b166770" exitCode=0 Dec 13 22:19:05 crc kubenswrapper[4866]: I1213 22:19:05.329766 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdvwj" event={"ID":"1a29082d-49a1-4625-9d9b-568ef75773c8","Type":"ContainerDied","Data":"6a9489ac5a0bc95cf1c84de0c1e9fb176642788ccdfaac58a80279af8b166770"} Dec 13 22:19:05 crc kubenswrapper[4866]: I1213 22:19:05.374332 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ls9kr" Dec 13 22:19:06 crc kubenswrapper[4866]: I1213 22:19:06.663499 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ppd79"] Dec 13 22:19:06 crc kubenswrapper[4866]: I1213 22:19:06.665601 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ppd79" podUID="9d76fbae-d65b-45df-aee5-0924d2ec35e8" containerName="registry-server" containerID="cri-o://d22264a5a2852871716c5b6ce8be84a222012c3e13e2b87354b37ef3a8d5c0c1" gracePeriod=2 Dec 13 22:19:08 crc kubenswrapper[4866]: I1213 22:19:08.346857 4866 generic.go:334] "Generic (PLEG): container finished" podID="9d76fbae-d65b-45df-aee5-0924d2ec35e8" containerID="d22264a5a2852871716c5b6ce8be84a222012c3e13e2b87354b37ef3a8d5c0c1" exitCode=0 Dec 13 22:19:08 crc kubenswrapper[4866]: I1213 22:19:08.346942 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ppd79" event={"ID":"9d76fbae-d65b-45df-aee5-0924d2ec35e8","Type":"ContainerDied","Data":"d22264a5a2852871716c5b6ce8be84a222012c3e13e2b87354b37ef3a8d5c0c1"} Dec 13 22:19:08 crc kubenswrapper[4866]: I1213 22:19:08.477112 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ls9kr"] Dec 13 22:19:08 crc kubenswrapper[4866]: I1213 22:19:08.477838 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ls9kr" podUID="bfe21f96-9dfe-4862-8010-11559fa1c2b4" containerName="registry-server" containerID="cri-o://9a0f4eb4f40e60860659e20deaf86ea2ea0db1ebde2ef478078b7d373d67fd83" gracePeriod=2 Dec 13 22:19:09 crc kubenswrapper[4866]: I1213 22:19:09.264473 4866 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ppd79" Dec 13 22:19:09 crc kubenswrapper[4866]: I1213 22:19:09.353634 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ppd79" event={"ID":"9d76fbae-d65b-45df-aee5-0924d2ec35e8","Type":"ContainerDied","Data":"d8ec1824da967253ebc4c0ca44384ad6ee5cc096781c99e91564f62fdc570610"} Dec 13 22:19:09 crc kubenswrapper[4866]: I1213 22:19:09.353685 4866 scope.go:117] "RemoveContainer" containerID="d22264a5a2852871716c5b6ce8be84a222012c3e13e2b87354b37ef3a8d5c0c1" Dec 13 22:19:09 crc kubenswrapper[4866]: I1213 22:19:09.353845 4866 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ppd79" Dec 13 22:19:09 crc kubenswrapper[4866]: I1213 22:19:09.434560 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m888h\" (UniqueName: \"kubernetes.io/projected/9d76fbae-d65b-45df-aee5-0924d2ec35e8-kube-api-access-m888h\") pod \"9d76fbae-d65b-45df-aee5-0924d2ec35e8\" (UID: \"9d76fbae-d65b-45df-aee5-0924d2ec35e8\") " Dec 13 22:19:09 crc kubenswrapper[4866]: I1213 22:19:09.434702 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d76fbae-d65b-45df-aee5-0924d2ec35e8-catalog-content\") pod \"9d76fbae-d65b-45df-aee5-0924d2ec35e8\" (UID: \"9d76fbae-d65b-45df-aee5-0924d2ec35e8\") " Dec 13 22:19:09 crc kubenswrapper[4866]: I1213 22:19:09.434782 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d76fbae-d65b-45df-aee5-0924d2ec35e8-utilities\") pod \"9d76fbae-d65b-45df-aee5-0924d2ec35e8\" (UID: \"9d76fbae-d65b-45df-aee5-0924d2ec35e8\") " Dec 13 22:19:09 crc kubenswrapper[4866]: I1213 22:19:09.435695 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d76fbae-d65b-45df-aee5-0924d2ec35e8-utilities" (OuterVolumeSpecName: "utilities") pod "9d76fbae-d65b-45df-aee5-0924d2ec35e8" (UID: "9d76fbae-d65b-45df-aee5-0924d2ec35e8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 22:19:09 crc kubenswrapper[4866]: I1213 22:19:09.441696 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d76fbae-d65b-45df-aee5-0924d2ec35e8-kube-api-access-m888h" (OuterVolumeSpecName: "kube-api-access-m888h") pod "9d76fbae-d65b-45df-aee5-0924d2ec35e8" (UID: "9d76fbae-d65b-45df-aee5-0924d2ec35e8"). InnerVolumeSpecName "kube-api-access-m888h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:19:09 crc kubenswrapper[4866]: I1213 22:19:09.487144 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d76fbae-d65b-45df-aee5-0924d2ec35e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d76fbae-d65b-45df-aee5-0924d2ec35e8" (UID: "9d76fbae-d65b-45df-aee5-0924d2ec35e8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 22:19:09 crc kubenswrapper[4866]: I1213 22:19:09.536588 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m888h\" (UniqueName: \"kubernetes.io/projected/9d76fbae-d65b-45df-aee5-0924d2ec35e8-kube-api-access-m888h\") on node \"crc\" DevicePath \"\"" Dec 13 22:19:09 crc kubenswrapper[4866]: I1213 22:19:09.536624 4866 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d76fbae-d65b-45df-aee5-0924d2ec35e8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 22:19:09 crc kubenswrapper[4866]: I1213 22:19:09.536636 4866 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d76fbae-d65b-45df-aee5-0924d2ec35e8-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 22:19:09 crc kubenswrapper[4866]: I1213 22:19:09.684661 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ppd79"] Dec 13 22:19:09 crc kubenswrapper[4866]: I1213 22:19:09.688073 4866 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ppd79"] Dec 13 22:19:09 crc kubenswrapper[4866]: I1213 22:19:09.718619 4866 scope.go:117] "RemoveContainer" containerID="74096b8a3b2ef9c432e8dd51051b2bc5aa5355e1e05271595e1e76b16ad232b5" Dec 13 22:19:10 crc kubenswrapper[4866]: I1213 22:19:10.220134 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d76fbae-d65b-45df-aee5-0924d2ec35e8" path="/var/lib/kubelet/pods/9d76fbae-d65b-45df-aee5-0924d2ec35e8/volumes" Dec 13 22:19:11 crc kubenswrapper[4866]: I1213 22:19:11.370152 4866 generic.go:334] "Generic (PLEG): container finished" podID="bfe21f96-9dfe-4862-8010-11559fa1c2b4" containerID="9a0f4eb4f40e60860659e20deaf86ea2ea0db1ebde2ef478078b7d373d67fd83" exitCode=0 Dec 13 22:19:11 crc kubenswrapper[4866]: I1213 22:19:11.370211 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ls9kr" event={"ID":"bfe21f96-9dfe-4862-8010-11559fa1c2b4","Type":"ContainerDied","Data":"9a0f4eb4f40e60860659e20deaf86ea2ea0db1ebde2ef478078b7d373d67fd83"} Dec 13 22:19:12 crc kubenswrapper[4866]: I1213 22:19:12.570996 4866 scope.go:117] "RemoveContainer" containerID="d4d8923b2fdeb373960fcd4407dddd029e4eaa9a2f22975f1e85705d4f8fb4e6" Dec 13 22:19:12 crc kubenswrapper[4866]: I1213 22:19:12.627526 4866 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ls9kr" Dec 13 22:19:12 crc kubenswrapper[4866]: I1213 22:19:12.776678 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfe21f96-9dfe-4862-8010-11559fa1c2b4-utilities\") pod \"bfe21f96-9dfe-4862-8010-11559fa1c2b4\" (UID: \"bfe21f96-9dfe-4862-8010-11559fa1c2b4\") " Dec 13 22:19:12 crc kubenswrapper[4866]: I1213 22:19:12.777014 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfe21f96-9dfe-4862-8010-11559fa1c2b4-catalog-content\") pod \"bfe21f96-9dfe-4862-8010-11559fa1c2b4\" (UID: \"bfe21f96-9dfe-4862-8010-11559fa1c2b4\") " Dec 13 22:19:12 crc kubenswrapper[4866]: I1213 22:19:12.777098 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tztd2\" (UniqueName: \"kubernetes.io/projected/bfe21f96-9dfe-4862-8010-11559fa1c2b4-kube-api-access-tztd2\") pod \"bfe21f96-9dfe-4862-8010-11559fa1c2b4\" (UID: \"bfe21f96-9dfe-4862-8010-11559fa1c2b4\") " Dec 13 22:19:12 crc kubenswrapper[4866]: I1213 22:19:12.777443 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfe21f96-9dfe-4862-8010-11559fa1c2b4-utilities" (OuterVolumeSpecName: "utilities") pod "bfe21f96-9dfe-4862-8010-11559fa1c2b4" (UID: "bfe21f96-9dfe-4862-8010-11559fa1c2b4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 22:19:12 crc kubenswrapper[4866]: I1213 22:19:12.789424 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfe21f96-9dfe-4862-8010-11559fa1c2b4-kube-api-access-tztd2" (OuterVolumeSpecName: "kube-api-access-tztd2") pod "bfe21f96-9dfe-4862-8010-11559fa1c2b4" (UID: "bfe21f96-9dfe-4862-8010-11559fa1c2b4"). InnerVolumeSpecName "kube-api-access-tztd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:19:12 crc kubenswrapper[4866]: I1213 22:19:12.798174 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfe21f96-9dfe-4862-8010-11559fa1c2b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bfe21f96-9dfe-4862-8010-11559fa1c2b4" (UID: "bfe21f96-9dfe-4862-8010-11559fa1c2b4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 22:19:12 crc kubenswrapper[4866]: I1213 22:19:12.878331 4866 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfe21f96-9dfe-4862-8010-11559fa1c2b4-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 22:19:12 crc kubenswrapper[4866]: I1213 22:19:12.878368 4866 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfe21f96-9dfe-4862-8010-11559fa1c2b4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 22:19:12 crc kubenswrapper[4866]: I1213 22:19:12.878382 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tztd2\" (UniqueName: \"kubernetes.io/projected/bfe21f96-9dfe-4862-8010-11559fa1c2b4-kube-api-access-tztd2\") on node \"crc\" DevicePath \"\"" Dec 13 22:19:13 crc kubenswrapper[4866]: I1213 22:19:13.383848 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9rw56" event={"ID":"b87d7d9b-aff1-45c9-8824-35fe7442cc07","Type":"ContainerStarted","Data":"be2646684a97bff4eb5771338cc8a629d0ac83adc81e965ad1e9555ab58fc9b3"} Dec 13 22:19:13 crc kubenswrapper[4866]: I1213 22:19:13.390192 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ls9kr" event={"ID":"bfe21f96-9dfe-4862-8010-11559fa1c2b4","Type":"ContainerDied","Data":"9c35ce655382e6f2c201c991306d41d1e455a8419b6b5f852baa74d837ceebfb"} Dec 13 22:19:13 crc kubenswrapper[4866]: I1213 22:19:13.390244 4866 scope.go:117] "RemoveContainer" containerID="9a0f4eb4f40e60860659e20deaf86ea2ea0db1ebde2ef478078b7d373d67fd83" Dec 13 22:19:13 crc kubenswrapper[4866]: I1213 22:19:13.390265 4866 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ls9kr" Dec 13 22:19:13 crc kubenswrapper[4866]: I1213 22:19:13.415976 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9rw56" podStartSLOduration=6.035805507 podStartE2EDuration="1m28.415960688s" podCreationTimestamp="2025-12-13 22:17:45 +0000 UTC" firstStartedPulling="2025-12-13 22:17:50.190947067 +0000 UTC m=+68.232285619" lastFinishedPulling="2025-12-13 22:19:12.571102248 +0000 UTC m=+150.612440800" observedRunningTime="2025-12-13 22:19:13.408190103 +0000 UTC m=+151.449528665" watchObservedRunningTime="2025-12-13 22:19:13.415960688 +0000 UTC m=+151.457299230" Dec 13 22:19:13 crc kubenswrapper[4866]: I1213 22:19:13.419417 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ls9kr"] Dec 13 22:19:13 crc kubenswrapper[4866]: I1213 22:19:13.421135 4866 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ls9kr"] Dec 13 22:19:13 crc kubenswrapper[4866]: I1213 22:19:13.746338 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cwcm4" Dec 13 22:19:13 crc kubenswrapper[4866]: I1213 22:19:13.997251 4866 scope.go:117] "RemoveContainer" containerID="dba0a8f06ae8cfdb6a18d15e00cc5244e7a2ebf5bcc59654f478426218918e19" Dec 13 22:19:14 crc kubenswrapper[4866]: I1213 22:19:14.219425 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfe21f96-9dfe-4862-8010-11559fa1c2b4" path="/var/lib/kubelet/pods/bfe21f96-9dfe-4862-8010-11559fa1c2b4/volumes" Dec 13 22:19:16 crc kubenswrapper[4866]: I1213 22:19:16.177534 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9rw56" Dec 13 22:19:16 crc kubenswrapper[4866]: I1213 22:19:16.178144 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9rw56" Dec 13 22:19:16 crc kubenswrapper[4866]: I1213 22:19:16.373177 4866 scope.go:117] "RemoveContainer" containerID="e052c771b7a4945c01a0c7681b5e54b7a85de6e5393512a2856ceafc790d0176" Dec 13 22:19:17 crc kubenswrapper[4866]: I1213 22:19:17.070920 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cwcm4"] Dec 13 22:19:17 crc kubenswrapper[4866]: I1213 22:19:17.071239 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cwcm4" podUID="bb228672-927d-42e6-bcde-d5733629cae2" containerName="registry-server" containerID="cri-o://aa48726995187ce0e28e1f8e69d6612b22876d04b52ad06bf09592ef4f801bb4" gracePeriod=2 Dec 13 22:19:17 crc kubenswrapper[4866]: I1213 22:19:17.239115 4866 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9rw56" podUID="b87d7d9b-aff1-45c9-8824-35fe7442cc07" containerName="registry-server" probeResult="failure" output=< Dec 13 22:19:17 crc kubenswrapper[4866]: timeout: failed to connect service ":50051" within 1s Dec 13 22:19:17 crc kubenswrapper[4866]: > Dec 13 22:19:17 crc kubenswrapper[4866]: I1213 22:19:17.420202 4866 generic.go:334] "Generic (PLEG): container finished" podID="bb228672-927d-42e6-bcde-d5733629cae2" containerID="aa48726995187ce0e28e1f8e69d6612b22876d04b52ad06bf09592ef4f801bb4" exitCode=0 Dec 13 22:19:17 crc kubenswrapper[4866]: I1213 22:19:17.420269 4866 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-cwcm4" event={"ID":"bb228672-927d-42e6-bcde-d5733629cae2","Type":"ContainerDied","Data":"aa48726995187ce0e28e1f8e69d6612b22876d04b52ad06bf09592ef4f801bb4"} Dec 13 22:19:17 crc kubenswrapper[4866]: I1213 22:19:17.426750 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rw67t" event={"ID":"d019a2fd-1864-4c5b-8deb-62c898466850","Type":"ContainerStarted","Data":"dd75b0e02026e02d0df6c6baf28aa236eac032b321561b66463d203012f1c775"} Dec 13 22:19:17 crc kubenswrapper[4866]: I1213 22:19:17.431902 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdvwj" event={"ID":"1a29082d-49a1-4625-9d9b-568ef75773c8","Type":"ContainerStarted","Data":"4406f8808966fae1ee8e0386b8ed67563049a20cad7eca03e80446550de080c8"} Dec 13 22:19:17 crc kubenswrapper[4866]: I1213 22:19:17.454553 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rw67t" podStartSLOduration=4.959981655 podStartE2EDuration="1m37.454536628s" podCreationTimestamp="2025-12-13 22:17:40 +0000 UTC" firstStartedPulling="2025-12-13 22:17:43.875154028 +0000 UTC m=+61.916492580" lastFinishedPulling="2025-12-13 22:19:16.369709001 +0000 UTC m=+154.411047553" observedRunningTime="2025-12-13 22:19:17.450729038 +0000 UTC m=+155.492067590" watchObservedRunningTime="2025-12-13 22:19:17.454536628 +0000 UTC m=+155.495875180" Dec 13 22:19:17 crc kubenswrapper[4866]: I1213 22:19:17.469197 4866 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cwcm4" Dec 13 22:19:17 crc kubenswrapper[4866]: I1213 22:19:17.488911 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zdvwj" podStartSLOduration=7.908378227 podStartE2EDuration="1m32.488896023s" podCreationTimestamp="2025-12-13 22:17:45 +0000 UTC" firstStartedPulling="2025-12-13 22:17:50.222356024 +0000 UTC m=+68.263694576" lastFinishedPulling="2025-12-13 22:19:14.80287378 +0000 UTC m=+152.844212372" observedRunningTime="2025-12-13 22:19:17.474331541 +0000 UTC m=+155.515670083" watchObservedRunningTime="2025-12-13 22:19:17.488896023 +0000 UTC m=+155.530234575" Dec 13 22:19:17 crc kubenswrapper[4866]: I1213 22:19:17.668599 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f7b7\" (UniqueName: \"kubernetes.io/projected/bb228672-927d-42e6-bcde-d5733629cae2-kube-api-access-6f7b7\") pod \"bb228672-927d-42e6-bcde-d5733629cae2\" (UID: \"bb228672-927d-42e6-bcde-d5733629cae2\") " Dec 13 22:19:17 crc kubenswrapper[4866]: I1213 22:19:17.668828 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb228672-927d-42e6-bcde-d5733629cae2-catalog-content\") pod \"bb228672-927d-42e6-bcde-d5733629cae2\" (UID: \"bb228672-927d-42e6-bcde-d5733629cae2\") " Dec 13 22:19:17 crc kubenswrapper[4866]: I1213 22:19:17.668966 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb228672-927d-42e6-bcde-d5733629cae2-utilities\") pod \"bb228672-927d-42e6-bcde-d5733629cae2\" (UID: \"bb228672-927d-42e6-bcde-d5733629cae2\") " Dec 13 22:19:17 crc kubenswrapper[4866]: I1213 22:19:17.669936 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/bb228672-927d-42e6-bcde-d5733629cae2-utilities" (OuterVolumeSpecName: "utilities") pod "bb228672-927d-42e6-bcde-d5733629cae2" (UID: "bb228672-927d-42e6-bcde-d5733629cae2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 22:19:17 crc kubenswrapper[4866]: I1213 22:19:17.678422 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb228672-927d-42e6-bcde-d5733629cae2-kube-api-access-6f7b7" (OuterVolumeSpecName: "kube-api-access-6f7b7") pod "bb228672-927d-42e6-bcde-d5733629cae2" (UID: "bb228672-927d-42e6-bcde-d5733629cae2"). InnerVolumeSpecName "kube-api-access-6f7b7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:19:17 crc kubenswrapper[4866]: I1213 22:19:17.713656 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb228672-927d-42e6-bcde-d5733629cae2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb228672-927d-42e6-bcde-d5733629cae2" (UID: "bb228672-927d-42e6-bcde-d5733629cae2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 22:19:17 crc kubenswrapper[4866]: I1213 22:19:17.770777 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6f7b7\" (UniqueName: \"kubernetes.io/projected/bb228672-927d-42e6-bcde-d5733629cae2-kube-api-access-6f7b7\") on node \"crc\" DevicePath \"\"" Dec 13 22:19:17 crc kubenswrapper[4866]: I1213 22:19:17.770803 4866 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb228672-927d-42e6-bcde-d5733629cae2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 22:19:17 crc kubenswrapper[4866]: I1213 22:19:17.770829 4866 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb228672-927d-42e6-bcde-d5733629cae2-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 22:19:18 crc kubenswrapper[4866]: I1213 22:19:18.442024 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwcm4" event={"ID":"bb228672-927d-42e6-bcde-d5733629cae2","Type":"ContainerDied","Data":"85a8fab3dc86ed7d029b9e356a0c53a789412d2290c3588cf26c46d37e71df2e"} Dec 13 22:19:18 crc kubenswrapper[4866]: I1213 22:19:18.442129 4866 scope.go:117] "RemoveContainer" containerID="aa48726995187ce0e28e1f8e69d6612b22876d04b52ad06bf09592ef4f801bb4" Dec 13 22:19:18 crc kubenswrapper[4866]: I1213 22:19:18.442820 4866 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cwcm4" Dec 13 22:19:18 crc kubenswrapper[4866]: I1213 22:19:18.457662 4866 scope.go:117] "RemoveContainer" containerID="e3987f1947baedcac73e8d3586ecdc56f6ab49989b355d762123519e80462716" Dec 13 22:19:18 crc kubenswrapper[4866]: I1213 22:19:18.463557 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cwcm4"] Dec 13 22:19:18 crc kubenswrapper[4866]: I1213 22:19:18.470201 4866 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cwcm4"] Dec 13 22:19:18 crc kubenswrapper[4866]: I1213 22:19:18.475857 4866 scope.go:117] "RemoveContainer" containerID="c50f2b389e9a83ad67450858e72445525e8e1bcc8c057cbbcf672e1dfd84929e" Dec 13 22:19:19 crc kubenswrapper[4866]: I1213 22:19:19.116650 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" podUID="144e732e-78b7-4e31-8f30-ed505c2ae0e9" containerName="oauth-openshift" containerID="cri-o://49575d35de63ee7105307233ccc361db454bad77711a325a1274b40559e6c82e" gracePeriod=15 Dec 13 22:19:20 crc kubenswrapper[4866]: I1213 22:19:20.220445 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb228672-927d-42e6-bcde-d5733629cae2" path="/var/lib/kubelet/pods/bb228672-927d-42e6-bcde-d5733629cae2/volumes" Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.138408 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rw67t" Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.138775 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rw67t" Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.193457 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rw67t" Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.298799 4866 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.410781 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-user-template-error\") pod \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.410841 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/144e732e-78b7-4e31-8f30-ed505c2ae0e9-audit-policies\") pod \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.410882 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-session\") pod \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.410922 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b58c7\" (UniqueName: \"kubernetes.io/projected/144e732e-78b7-4e31-8f30-ed505c2ae0e9-kube-api-access-b58c7\") pod \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.410943 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-cliconfig\") pod \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.410976 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-router-certs\") pod \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.411004 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-ocp-branding-template\") pod \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.411024 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-serving-cert\") pod \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.411089 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-user-idp-0-file-data\") pod \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " Dec 13 22:19:21 crc kubenswrapper[4866]: 
I1213 22:19:21.411119 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/144e732e-78b7-4e31-8f30-ed505c2ae0e9-audit-dir\") pod \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.411141 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-service-ca\") pod \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.411165 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-trusted-ca-bundle\") pod \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.411222 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-user-template-login\") pod \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.411273 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-user-template-provider-selection\") pod \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\" (UID: \"144e732e-78b7-4e31-8f30-ed505c2ae0e9\") " Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.411787 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "144e732e-78b7-4e31-8f30-ed505c2ae0e9" (UID: "144e732e-78b7-4e31-8f30-ed505c2ae0e9"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.413851 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "144e732e-78b7-4e31-8f30-ed505c2ae0e9" (UID: "144e732e-78b7-4e31-8f30-ed505c2ae0e9"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.413916 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/144e732e-78b7-4e31-8f30-ed505c2ae0e9-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "144e732e-78b7-4e31-8f30-ed505c2ae0e9" (UID: "144e732e-78b7-4e31-8f30-ed505c2ae0e9"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.413949 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/144e732e-78b7-4e31-8f30-ed505c2ae0e9-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "144e732e-78b7-4e31-8f30-ed505c2ae0e9" (UID: "144e732e-78b7-4e31-8f30-ed505c2ae0e9"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.417227 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "144e732e-78b7-4e31-8f30-ed505c2ae0e9" (UID: "144e732e-78b7-4e31-8f30-ed505c2ae0e9"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.417687 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "144e732e-78b7-4e31-8f30-ed505c2ae0e9" (UID: "144e732e-78b7-4e31-8f30-ed505c2ae0e9"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.417705 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "144e732e-78b7-4e31-8f30-ed505c2ae0e9" (UID: "144e732e-78b7-4e31-8f30-ed505c2ae0e9"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.417780 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "144e732e-78b7-4e31-8f30-ed505c2ae0e9" (UID: "144e732e-78b7-4e31-8f30-ed505c2ae0e9"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.418298 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "144e732e-78b7-4e31-8f30-ed505c2ae0e9" (UID: "144e732e-78b7-4e31-8f30-ed505c2ae0e9"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.419121 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "144e732e-78b7-4e31-8f30-ed505c2ae0e9" (UID: "144e732e-78b7-4e31-8f30-ed505c2ae0e9"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.420517 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/144e732e-78b7-4e31-8f30-ed505c2ae0e9-kube-api-access-b58c7" (OuterVolumeSpecName: "kube-api-access-b58c7") pod "144e732e-78b7-4e31-8f30-ed505c2ae0e9" (UID: "144e732e-78b7-4e31-8f30-ed505c2ae0e9"). InnerVolumeSpecName "kube-api-access-b58c7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.421267 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "144e732e-78b7-4e31-8f30-ed505c2ae0e9" (UID: "144e732e-78b7-4e31-8f30-ed505c2ae0e9"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.422597 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "144e732e-78b7-4e31-8f30-ed505c2ae0e9" (UID: "144e732e-78b7-4e31-8f30-ed505c2ae0e9"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.426438 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "144e732e-78b7-4e31-8f30-ed505c2ae0e9" (UID: "144e732e-78b7-4e31-8f30-ed505c2ae0e9"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.462013 4866 generic.go:334] "Generic (PLEG): container finished" podID="144e732e-78b7-4e31-8f30-ed505c2ae0e9" containerID="49575d35de63ee7105307233ccc361db454bad77711a325a1274b40559e6c82e" exitCode=0 Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.462110 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" event={"ID":"144e732e-78b7-4e31-8f30-ed505c2ae0e9","Type":"ContainerDied","Data":"49575d35de63ee7105307233ccc361db454bad77711a325a1274b40559e6c82e"} Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.462157 4866 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.462181 4866 scope.go:117] "RemoveContainer" containerID="49575d35de63ee7105307233ccc361db454bad77711a325a1274b40559e6c82e" Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.462168 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fjf67" event={"ID":"144e732e-78b7-4e31-8f30-ed505c2ae0e9","Type":"ContainerDied","Data":"12f894700dc91092ec58aa9d468b28c3a00de3e72557c36c97114a2d58646bf9"} Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.500995 4866 scope.go:117] "RemoveContainer" containerID="49575d35de63ee7105307233ccc361db454bad77711a325a1274b40559e6c82e" Dec 13 22:19:21 crc kubenswrapper[4866]: E1213 22:19:21.503586 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49575d35de63ee7105307233ccc361db454bad77711a325a1274b40559e6c82e\": container with ID starting with 49575d35de63ee7105307233ccc361db454bad77711a325a1274b40559e6c82e not found: ID does not exist" containerID="49575d35de63ee7105307233ccc361db454bad77711a325a1274b40559e6c82e" Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.503633 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49575d35de63ee7105307233ccc361db454bad77711a325a1274b40559e6c82e"} err="failed to get container status \"49575d35de63ee7105307233ccc361db454bad77711a325a1274b40559e6c82e\": rpc error: code = NotFound desc = could not find container \"49575d35de63ee7105307233ccc361db454bad77711a325a1274b40559e6c82e\": container with ID starting with 49575d35de63ee7105307233ccc361db454bad77711a325a1274b40559e6c82e not found: ID does not exist" Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.504781 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fjf67"] Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.507289 4866 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fjf67"] Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.512897 4866 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.512921 4866 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.512931 4866 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/144e732e-78b7-4e31-8f30-ed505c2ae0e9-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.512983 4866 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.512993 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b58c7\" (UniqueName: 
\"kubernetes.io/projected/144e732e-78b7-4e31-8f30-ed505c2ae0e9-kube-api-access-b58c7\") on node \"crc\" DevicePath \"\"" Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.513002 4866 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.513010 4866 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.513019 4866 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.513027 4866 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.513036 4866 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.513064 4866 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/144e732e-78b7-4e31-8f30-ed505c2ae0e9-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.513073 4866 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.513081 4866 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 13 22:19:21 crc kubenswrapper[4866]: I1213 22:19:21.513091 4866 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/144e732e-78b7-4e31-8f30-ed505c2ae0e9-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 13 22:19:22 crc kubenswrapper[4866]: I1213 22:19:22.218662 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="144e732e-78b7-4e31-8f30-ed505c2ae0e9" path="/var/lib/kubelet/pods/144e732e-78b7-4e31-8f30-ed505c2ae0e9/volumes" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.230795 4866 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.231126 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" 
containerID="cri-o://0a756e1889ab55c446200026441e5add70e87750f0c7bcd844c6d69410828074" gracePeriod=15 Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.231171 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://315b5749072410defd6fed09f5cc953c074319c1adda448fccd60a5d25c1f10c" gracePeriod=15 Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.231222 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://4f51680eb649a8778df43c6e4f2389caa6a69254bef0dee34535d27cc69fd688" gracePeriod=15 Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.231192 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://b57855b825b37a7a5c179476a16a5f4a6d66d9299395ad2bf8972b515ec81335" gracePeriod=15 Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.231319 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://b4192aeb8b99901fba1d442e613510c1d2dca07650c4a84e6f585d42b4de74c6" gracePeriod=15 Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.234231 4866 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 13 22:19:24 crc kubenswrapper[4866]: E1213 22:19:24.234478 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.234496 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 13 22:19:24 crc kubenswrapper[4866]: E1213 22:19:24.234508 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.234518 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 13 22:19:24 crc kubenswrapper[4866]: E1213 22:19:24.234532 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.234539 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 13 22:19:24 crc kubenswrapper[4866]: E1213 22:19:24.234549 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfe21f96-9dfe-4862-8010-11559fa1c2b4" containerName="extract-utilities" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.234557 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfe21f96-9dfe-4862-8010-11559fa1c2b4" containerName="extract-utilities" Dec 13 22:19:24 crc kubenswrapper[4866]: E1213 22:19:24.234567 4866 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.234575 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 13 22:19:24 crc kubenswrapper[4866]: E1213 22:19:24.234584 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb228672-927d-42e6-bcde-d5733629cae2" containerName="extract-content" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.234591 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb228672-927d-42e6-bcde-d5733629cae2" containerName="extract-content" Dec 13 22:19:24 crc kubenswrapper[4866]: E1213 22:19:24.234600 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d76fbae-d65b-45df-aee5-0924d2ec35e8" containerName="extract-utilities" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.234608 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d76fbae-d65b-45df-aee5-0924d2ec35e8" containerName="extract-utilities" Dec 13 22:19:24 crc kubenswrapper[4866]: E1213 22:19:24.234622 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb228672-927d-42e6-bcde-d5733629cae2" containerName="registry-server" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.234630 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb228672-927d-42e6-bcde-d5733629cae2" containerName="registry-server" Dec 13 22:19:24 crc kubenswrapper[4866]: E1213 22:19:24.234641 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb228672-927d-42e6-bcde-d5733629cae2" containerName="extract-utilities" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.234651 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb228672-927d-42e6-bcde-d5733629cae2" containerName="extract-utilities" Dec 13 22:19:24 crc kubenswrapper[4866]: E1213 22:19:24.234660 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.234668 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 13 22:19:24 crc kubenswrapper[4866]: E1213 22:19:24.234677 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfe21f96-9dfe-4862-8010-11559fa1c2b4" containerName="registry-server" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.234686 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfe21f96-9dfe-4862-8010-11559fa1c2b4" containerName="registry-server" Dec 13 22:19:24 crc kubenswrapper[4866]: E1213 22:19:24.234695 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.234702 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 13 22:19:24 crc kubenswrapper[4866]: E1213 22:19:24.234711 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eba0853-9926-49f5-be98-96822f62de20" containerName="pruner" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.234719 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eba0853-9926-49f5-be98-96822f62de20" containerName="pruner" Dec 13 22:19:24 crc kubenswrapper[4866]: E1213 22:19:24.234730 4866 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="144e732e-78b7-4e31-8f30-ed505c2ae0e9" containerName="oauth-openshift" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.234738 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="144e732e-78b7-4e31-8f30-ed505c2ae0e9" containerName="oauth-openshift" Dec 13 22:19:24 crc kubenswrapper[4866]: E1213 22:19:24.234749 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d76fbae-d65b-45df-aee5-0924d2ec35e8" containerName="extract-content" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.234756 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d76fbae-d65b-45df-aee5-0924d2ec35e8" containerName="extract-content" Dec 13 22:19:24 crc kubenswrapper[4866]: E1213 22:19:24.234768 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfe21f96-9dfe-4862-8010-11559fa1c2b4" containerName="extract-content" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.234776 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfe21f96-9dfe-4862-8010-11559fa1c2b4" containerName="extract-content" Dec 13 22:19:24 crc kubenswrapper[4866]: E1213 22:19:24.234788 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d76fbae-d65b-45df-aee5-0924d2ec35e8" containerName="registry-server" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.234797 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d76fbae-d65b-45df-aee5-0924d2ec35e8" containerName="registry-server" Dec 13 22:19:24 crc kubenswrapper[4866]: E1213 22:19:24.234809 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb8f2404-0acf-4a0a-a581-b5c767351742" containerName="kube-multus-additional-cni-plugins" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.234819 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb8f2404-0acf-4a0a-a581-b5c767351742" containerName="kube-multus-additional-cni-plugins" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.234945 4866 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.234961 4866 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.234970 4866 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.234981 4866 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb8f2404-0acf-4a0a-a581-b5c767351742" containerName="kube-multus-additional-cni-plugins" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.234993 4866 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d76fbae-d65b-45df-aee5-0924d2ec35e8" containerName="registry-server" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.235002 4866 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.235014 4866 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfe21f96-9dfe-4862-8010-11559fa1c2b4" containerName="registry-server" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.235025 4866 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bb228672-927d-42e6-bcde-d5733629cae2" containerName="registry-server" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.235035 4866 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.235065 4866 memory_manager.go:354] "RemoveStaleState removing state" podUID="144e732e-78b7-4e31-8f30-ed505c2ae0e9" containerName="oauth-openshift" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.235077 4866 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eba0853-9926-49f5-be98-96822f62de20" containerName="pruner" Dec 13 22:19:24 crc kubenswrapper[4866]: E1213 22:19:24.235189 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.235198 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.235314 4866 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.241323 4866 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.242782 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.250408 4866 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.275444 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.352369 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.352423 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.352449 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.352808 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.352841 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.352870 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.352894 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.352917 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.453747 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.453829 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.453864 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.453915 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.453943 4866 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.453979 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.454005 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.454029 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.454201 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.454226 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.454218 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.454247 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.454272 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.454281 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.454243 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.454365 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.482417 4866 generic.go:334] "Generic (PLEG): container finished" podID="98e21a24-2f77-4e7a-a3ec-8c50a1822441" containerID="22dff46a226ad7c07711c7532e0bc7b1572131faa0b0a708c945294e755d3bfd" exitCode=0 Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.482512 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"98e21a24-2f77-4e7a-a3ec-8c50a1822441","Type":"ContainerDied","Data":"22dff46a226ad7c07711c7532e0bc7b1572131faa0b0a708c945294e755d3bfd"} Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.483604 4866 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.484068 4866 status_manager.go:851] "Failed to get status for pod" podUID="98e21a24-2f77-4e7a-a3ec-8c50a1822441" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.484238 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.485387 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.485992 4866 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="315b5749072410defd6fed09f5cc953c074319c1adda448fccd60a5d25c1f10c" exitCode=0 Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.486014 4866 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b4192aeb8b99901fba1d442e613510c1d2dca07650c4a84e6f585d42b4de74c6" exitCode=0 Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.486024 4866 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b57855b825b37a7a5c179476a16a5f4a6d66d9299395ad2bf8972b515ec81335" 
exitCode=0 Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.486033 4866 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4f51680eb649a8778df43c6e4f2389caa6a69254bef0dee34535d27cc69fd688" exitCode=2 Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.486095 4866 scope.go:117] "RemoveContainer" containerID="c5f0c2c3ce34f3961f4be5be9beff2bbf0de2899cfac9219106773fdd8fb1969" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.574745 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 22:19:24 crc kubenswrapper[4866]: W1213 22:19:24.617839 4866 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-c3c78375e07b769e22a3bf680248ffc791708ea05b6a3bc4df51872365426427 WatchSource:0}: Error finding container c3c78375e07b769e22a3bf680248ffc791708ea05b6a3bc4df51872365426427: Status 404 returned error can't find the container with id c3c78375e07b769e22a3bf680248ffc791708ea05b6a3bc4df51872365426427 Dec 13 22:19:24 crc kubenswrapper[4866]: E1213 22:19:24.621293 4866 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.69:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1880e6627dbfe8b4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-13 22:19:24.620146868 +0000 UTC m=+162.661485420,LastTimestamp:2025-12-13 22:19:24.620146868 +0000 UTC m=+162.661485420,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 13 22:19:24 crc kubenswrapper[4866]: E1213 22:19:24.924411 4866 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:24 crc kubenswrapper[4866]: E1213 22:19:24.924647 4866 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:24 crc kubenswrapper[4866]: E1213 22:19:24.924873 4866 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:24 crc kubenswrapper[4866]: E1213 22:19:24.925120 4866 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: 
connection refused" Dec 13 22:19:24 crc kubenswrapper[4866]: E1213 22:19:24.925362 4866 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:24 crc kubenswrapper[4866]: I1213 22:19:24.925389 4866 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 13 22:19:24 crc kubenswrapper[4866]: E1213 22:19:24.925662 4866 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="200ms" Dec 13 22:19:25 crc kubenswrapper[4866]: E1213 22:19:25.126403 4866 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="400ms" Dec 13 22:19:25 crc kubenswrapper[4866]: I1213 22:19:25.493575 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 13 22:19:25 crc kubenswrapper[4866]: I1213 22:19:25.496166 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"4740520e32043b4878647765b3204eb44c4d962c4443ba33304835169934b0dd"} Dec 13 22:19:25 crc kubenswrapper[4866]: I1213 22:19:25.496218 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"c3c78375e07b769e22a3bf680248ffc791708ea05b6a3bc4df51872365426427"} Dec 13 22:19:25 crc kubenswrapper[4866]: I1213 22:19:25.497274 4866 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:25 crc kubenswrapper[4866]: I1213 22:19:25.497632 4866 status_manager.go:851] "Failed to get status for pod" podUID="98e21a24-2f77-4e7a-a3ec-8c50a1822441" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:25 crc kubenswrapper[4866]: E1213 22:19:25.527687 4866 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="800ms" Dec 13 22:19:25 crc kubenswrapper[4866]: I1213 22:19:25.768358 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zdvwj" Dec 13 22:19:25 crc kubenswrapper[4866]: I1213 22:19:25.768560 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-zdvwj" Dec 13 22:19:25 crc kubenswrapper[4866]: I1213 22:19:25.780449 4866 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 13 22:19:25 crc kubenswrapper[4866]: I1213 22:19:25.781547 4866 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:25 crc kubenswrapper[4866]: I1213 22:19:25.782004 4866 status_manager.go:851] "Failed to get status for pod" podUID="98e21a24-2f77-4e7a-a3ec-8c50a1822441" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:25 crc kubenswrapper[4866]: I1213 22:19:25.812211 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zdvwj" Dec 13 22:19:25 crc kubenswrapper[4866]: I1213 22:19:25.812931 4866 status_manager.go:851] "Failed to get status for pod" podUID="1a29082d-49a1-4625-9d9b-568ef75773c8" pod="openshift-marketplace/redhat-operators-zdvwj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zdvwj\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:25 crc kubenswrapper[4866]: I1213 22:19:25.813261 4866 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:25 crc kubenswrapper[4866]: I1213 22:19:25.813576 4866 status_manager.go:851] "Failed to get status for pod" podUID="98e21a24-2f77-4e7a-a3ec-8c50a1822441" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:25 crc kubenswrapper[4866]: I1213 22:19:25.969932 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98e21a24-2f77-4e7a-a3ec-8c50a1822441-kubelet-dir\") pod \"98e21a24-2f77-4e7a-a3ec-8c50a1822441\" (UID: \"98e21a24-2f77-4e7a-a3ec-8c50a1822441\") " Dec 13 22:19:25 crc kubenswrapper[4866]: I1213 22:19:25.970003 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98e21a24-2f77-4e7a-a3ec-8c50a1822441-kube-api-access\") pod \"98e21a24-2f77-4e7a-a3ec-8c50a1822441\" (UID: \"98e21a24-2f77-4e7a-a3ec-8c50a1822441\") " Dec 13 22:19:25 crc kubenswrapper[4866]: I1213 22:19:25.970060 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/98e21a24-2f77-4e7a-a3ec-8c50a1822441-var-lock\") pod \"98e21a24-2f77-4e7a-a3ec-8c50a1822441\" (UID: \"98e21a24-2f77-4e7a-a3ec-8c50a1822441\") " Dec 13 22:19:25 crc kubenswrapper[4866]: I1213 22:19:25.970035 4866 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98e21a24-2f77-4e7a-a3ec-8c50a1822441-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "98e21a24-2f77-4e7a-a3ec-8c50a1822441" (UID: "98e21a24-2f77-4e7a-a3ec-8c50a1822441"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 22:19:25 crc kubenswrapper[4866]: I1213 22:19:25.970252 4866 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98e21a24-2f77-4e7a-a3ec-8c50a1822441-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 13 22:19:25 crc kubenswrapper[4866]: I1213 22:19:25.970255 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98e21a24-2f77-4e7a-a3ec-8c50a1822441-var-lock" (OuterVolumeSpecName: "var-lock") pod "98e21a24-2f77-4e7a-a3ec-8c50a1822441" (UID: "98e21a24-2f77-4e7a-a3ec-8c50a1822441"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 22:19:25 crc kubenswrapper[4866]: I1213 22:19:25.974377 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98e21a24-2f77-4e7a-a3ec-8c50a1822441-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "98e21a24-2f77-4e7a-a3ec-8c50a1822441" (UID: "98e21a24-2f77-4e7a-a3ec-8c50a1822441"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:19:26 crc kubenswrapper[4866]: I1213 22:19:26.071395 4866 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/98e21a24-2f77-4e7a-a3ec-8c50a1822441-var-lock\") on node \"crc\" DevicePath \"\"" Dec 13 22:19:26 crc kubenswrapper[4866]: I1213 22:19:26.071427 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98e21a24-2f77-4e7a-a3ec-8c50a1822441-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 13 22:19:26 crc kubenswrapper[4866]: I1213 22:19:26.219126 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9rw56" Dec 13 22:19:26 crc kubenswrapper[4866]: I1213 22:19:26.221765 4866 status_manager.go:851] "Failed to get status for pod" podUID="1a29082d-49a1-4625-9d9b-568ef75773c8" pod="openshift-marketplace/redhat-operators-zdvwj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zdvwj\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:26 crc kubenswrapper[4866]: I1213 22:19:26.222567 4866 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:26 crc kubenswrapper[4866]: I1213 22:19:26.223027 4866 status_manager.go:851] "Failed to get status for pod" podUID="b87d7d9b-aff1-45c9-8824-35fe7442cc07" pod="openshift-marketplace/redhat-operators-9rw56" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9rw56\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:26 crc kubenswrapper[4866]: I1213 22:19:26.223375 4866 status_manager.go:851] "Failed to get status for pod" 
podUID="98e21a24-2f77-4e7a-a3ec-8c50a1822441" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:26 crc kubenswrapper[4866]: I1213 22:19:26.259268 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9rw56" Dec 13 22:19:26 crc kubenswrapper[4866]: I1213 22:19:26.259985 4866 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:26 crc kubenswrapper[4866]: I1213 22:19:26.260265 4866 status_manager.go:851] "Failed to get status for pod" podUID="b87d7d9b-aff1-45c9-8824-35fe7442cc07" pod="openshift-marketplace/redhat-operators-9rw56" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9rw56\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:26 crc kubenswrapper[4866]: I1213 22:19:26.260686 4866 status_manager.go:851] "Failed to get status for pod" podUID="98e21a24-2f77-4e7a-a3ec-8c50a1822441" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:26 crc kubenswrapper[4866]: I1213 22:19:26.261115 4866 status_manager.go:851] "Failed to get status for pod" podUID="1a29082d-49a1-4625-9d9b-568ef75773c8" pod="openshift-marketplace/redhat-operators-zdvwj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zdvwj\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:26 crc kubenswrapper[4866]: E1213 22:19:26.329314 4866 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="1.6s" Dec 13 22:19:26 crc kubenswrapper[4866]: I1213 22:19:26.501737 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"98e21a24-2f77-4e7a-a3ec-8c50a1822441","Type":"ContainerDied","Data":"1532517d6d7c33f855174556233ed56a34be065fcfb6fc2d5cbadbcb59570a47"} Dec 13 22:19:26 crc kubenswrapper[4866]: I1213 22:19:26.501955 4866 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1532517d6d7c33f855174556233ed56a34be065fcfb6fc2d5cbadbcb59570a47" Dec 13 22:19:26 crc kubenswrapper[4866]: I1213 22:19:26.501760 4866 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 13 22:19:26 crc kubenswrapper[4866]: I1213 22:19:26.506911 4866 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:26 crc kubenswrapper[4866]: I1213 22:19:26.507088 4866 status_manager.go:851] "Failed to get status for pod" podUID="b87d7d9b-aff1-45c9-8824-35fe7442cc07" pod="openshift-marketplace/redhat-operators-9rw56" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9rw56\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:26 crc kubenswrapper[4866]: I1213 22:19:26.507224 4866 status_manager.go:851] "Failed to get status for pod" podUID="98e21a24-2f77-4e7a-a3ec-8c50a1822441" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:26 crc kubenswrapper[4866]: I1213 22:19:26.507373 4866 status_manager.go:851] "Failed to get status for pod" podUID="1a29082d-49a1-4625-9d9b-568ef75773c8" pod="openshift-marketplace/redhat-operators-zdvwj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zdvwj\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:26 crc kubenswrapper[4866]: I1213 22:19:26.542878 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zdvwj" Dec 13 22:19:26 crc kubenswrapper[4866]: I1213 22:19:26.543426 4866 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:26 crc kubenswrapper[4866]: I1213 22:19:26.543746 4866 status_manager.go:851] "Failed to get status for pod" podUID="b87d7d9b-aff1-45c9-8824-35fe7442cc07" pod="openshift-marketplace/redhat-operators-9rw56" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9rw56\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:26 crc kubenswrapper[4866]: I1213 22:19:26.544181 4866 status_manager.go:851] "Failed to get status for pod" podUID="98e21a24-2f77-4e7a-a3ec-8c50a1822441" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:26 crc kubenswrapper[4866]: I1213 22:19:26.544409 4866 status_manager.go:851] "Failed to get status for pod" podUID="1a29082d-49a1-4625-9d9b-568ef75773c8" pod="openshift-marketplace/redhat-operators-zdvwj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zdvwj\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:26 crc kubenswrapper[4866]: I1213 22:19:26.702243 4866 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 13 22:19:26 crc kubenswrapper[4866]: I1213 22:19:26.703007 4866 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:19:26 crc kubenswrapper[4866]: I1213 22:19:26.703650 4866 status_manager.go:851] "Failed to get status for pod" podUID="b87d7d9b-aff1-45c9-8824-35fe7442cc07" pod="openshift-marketplace/redhat-operators-9rw56" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9rw56\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:26 crc kubenswrapper[4866]: I1213 22:19:26.703899 4866 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:26 crc kubenswrapper[4866]: I1213 22:19:26.704114 4866 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:26 crc kubenswrapper[4866]: I1213 22:19:26.704340 4866 status_manager.go:851] "Failed to get status for pod" podUID="98e21a24-2f77-4e7a-a3ec-8c50a1822441" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:26 crc kubenswrapper[4866]: I1213 22:19:26.704614 4866 status_manager.go:851] "Failed to get status for pod" podUID="1a29082d-49a1-4625-9d9b-568ef75773c8" pod="openshift-marketplace/redhat-operators-zdvwj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zdvwj\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:26 crc kubenswrapper[4866]: I1213 22:19:26.880755 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 13 22:19:26 crc kubenswrapper[4866]: I1213 22:19:26.881132 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 13 22:19:26 crc kubenswrapper[4866]: I1213 22:19:26.881238 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 13 22:19:26 crc kubenswrapper[4866]: I1213 22:19:26.880921 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod 
"f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 22:19:26 crc kubenswrapper[4866]: I1213 22:19:26.881183 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 22:19:26 crc kubenswrapper[4866]: I1213 22:19:26.881321 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 22:19:26 crc kubenswrapper[4866]: I1213 22:19:26.881750 4866 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 13 22:19:26 crc kubenswrapper[4866]: I1213 22:19:26.881850 4866 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 13 22:19:26 crc kubenswrapper[4866]: I1213 22:19:26.882003 4866 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 13 22:19:27 crc kubenswrapper[4866]: E1213 22:19:27.285768 4866 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.69:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" volumeName="registry-storage" Dec 13 22:19:27 crc kubenswrapper[4866]: I1213 22:19:27.512622 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 13 22:19:27 crc kubenswrapper[4866]: I1213 22:19:27.513505 4866 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0a756e1889ab55c446200026441e5add70e87750f0c7bcd844c6d69410828074" exitCode=0 Dec 13 22:19:27 crc kubenswrapper[4866]: I1213 22:19:27.513593 4866 scope.go:117] "RemoveContainer" containerID="315b5749072410defd6fed09f5cc953c074319c1adda448fccd60a5d25c1f10c" Dec 13 22:19:27 crc kubenswrapper[4866]: I1213 22:19:27.513641 4866 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:19:27 crc kubenswrapper[4866]: I1213 22:19:27.529726 4866 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:27 crc kubenswrapper[4866]: I1213 22:19:27.530120 4866 status_manager.go:851] "Failed to get status for pod" podUID="b87d7d9b-aff1-45c9-8824-35fe7442cc07" pod="openshift-marketplace/redhat-operators-9rw56" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9rw56\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:27 crc kubenswrapper[4866]: I1213 22:19:27.530330 4866 scope.go:117] "RemoveContainer" containerID="b4192aeb8b99901fba1d442e613510c1d2dca07650c4a84e6f585d42b4de74c6" Dec 13 22:19:27 crc kubenswrapper[4866]: I1213 22:19:27.530327 4866 status_manager.go:851] "Failed to get status for pod" podUID="98e21a24-2f77-4e7a-a3ec-8c50a1822441" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:27 crc kubenswrapper[4866]: I1213 22:19:27.530659 4866 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:27 crc kubenswrapper[4866]: I1213 22:19:27.530888 4866 status_manager.go:851] "Failed to get status for pod" podUID="1a29082d-49a1-4625-9d9b-568ef75773c8" pod="openshift-marketplace/redhat-operators-zdvwj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zdvwj\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:27 crc kubenswrapper[4866]: I1213 22:19:27.544870 4866 scope.go:117] "RemoveContainer" containerID="b57855b825b37a7a5c179476a16a5f4a6d66d9299395ad2bf8972b515ec81335" Dec 13 22:19:27 crc kubenswrapper[4866]: I1213 22:19:27.557969 4866 scope.go:117] "RemoveContainer" containerID="4f51680eb649a8778df43c6e4f2389caa6a69254bef0dee34535d27cc69fd688" Dec 13 22:19:27 crc kubenswrapper[4866]: I1213 22:19:27.576557 4866 scope.go:117] "RemoveContainer" containerID="0a756e1889ab55c446200026441e5add70e87750f0c7bcd844c6d69410828074" Dec 13 22:19:27 crc kubenswrapper[4866]: I1213 22:19:27.594716 4866 scope.go:117] "RemoveContainer" containerID="e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd" Dec 13 22:19:27 crc kubenswrapper[4866]: I1213 22:19:27.617364 4866 scope.go:117] "RemoveContainer" containerID="315b5749072410defd6fed09f5cc953c074319c1adda448fccd60a5d25c1f10c" Dec 13 22:19:27 crc kubenswrapper[4866]: E1213 22:19:27.618421 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"315b5749072410defd6fed09f5cc953c074319c1adda448fccd60a5d25c1f10c\": container with ID starting with 315b5749072410defd6fed09f5cc953c074319c1adda448fccd60a5d25c1f10c not found: ID does not exist" 
containerID="315b5749072410defd6fed09f5cc953c074319c1adda448fccd60a5d25c1f10c" Dec 13 22:19:27 crc kubenswrapper[4866]: I1213 22:19:27.618523 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"315b5749072410defd6fed09f5cc953c074319c1adda448fccd60a5d25c1f10c"} err="failed to get container status \"315b5749072410defd6fed09f5cc953c074319c1adda448fccd60a5d25c1f10c\": rpc error: code = NotFound desc = could not find container \"315b5749072410defd6fed09f5cc953c074319c1adda448fccd60a5d25c1f10c\": container with ID starting with 315b5749072410defd6fed09f5cc953c074319c1adda448fccd60a5d25c1f10c not found: ID does not exist" Dec 13 22:19:27 crc kubenswrapper[4866]: I1213 22:19:27.618606 4866 scope.go:117] "RemoveContainer" containerID="b4192aeb8b99901fba1d442e613510c1d2dca07650c4a84e6f585d42b4de74c6" Dec 13 22:19:27 crc kubenswrapper[4866]: E1213 22:19:27.621196 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4192aeb8b99901fba1d442e613510c1d2dca07650c4a84e6f585d42b4de74c6\": container with ID starting with b4192aeb8b99901fba1d442e613510c1d2dca07650c4a84e6f585d42b4de74c6 not found: ID does not exist" containerID="b4192aeb8b99901fba1d442e613510c1d2dca07650c4a84e6f585d42b4de74c6" Dec 13 22:19:27 crc kubenswrapper[4866]: I1213 22:19:27.621235 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4192aeb8b99901fba1d442e613510c1d2dca07650c4a84e6f585d42b4de74c6"} err="failed to get container status \"b4192aeb8b99901fba1d442e613510c1d2dca07650c4a84e6f585d42b4de74c6\": rpc error: code = NotFound desc = could not find container \"b4192aeb8b99901fba1d442e613510c1d2dca07650c4a84e6f585d42b4de74c6\": container with ID starting with b4192aeb8b99901fba1d442e613510c1d2dca07650c4a84e6f585d42b4de74c6 not found: ID does not exist" Dec 13 22:19:27 crc kubenswrapper[4866]: I1213 22:19:27.621260 4866 scope.go:117] "RemoveContainer" containerID="b57855b825b37a7a5c179476a16a5f4a6d66d9299395ad2bf8972b515ec81335" Dec 13 22:19:27 crc kubenswrapper[4866]: E1213 22:19:27.621657 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b57855b825b37a7a5c179476a16a5f4a6d66d9299395ad2bf8972b515ec81335\": container with ID starting with b57855b825b37a7a5c179476a16a5f4a6d66d9299395ad2bf8972b515ec81335 not found: ID does not exist" containerID="b57855b825b37a7a5c179476a16a5f4a6d66d9299395ad2bf8972b515ec81335" Dec 13 22:19:27 crc kubenswrapper[4866]: I1213 22:19:27.621685 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b57855b825b37a7a5c179476a16a5f4a6d66d9299395ad2bf8972b515ec81335"} err="failed to get container status \"b57855b825b37a7a5c179476a16a5f4a6d66d9299395ad2bf8972b515ec81335\": rpc error: code = NotFound desc = could not find container \"b57855b825b37a7a5c179476a16a5f4a6d66d9299395ad2bf8972b515ec81335\": container with ID starting with b57855b825b37a7a5c179476a16a5f4a6d66d9299395ad2bf8972b515ec81335 not found: ID does not exist" Dec 13 22:19:27 crc kubenswrapper[4866]: I1213 22:19:27.621730 4866 scope.go:117] "RemoveContainer" containerID="4f51680eb649a8778df43c6e4f2389caa6a69254bef0dee34535d27cc69fd688" Dec 13 22:19:27 crc kubenswrapper[4866]: E1213 22:19:27.622950 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4f51680eb649a8778df43c6e4f2389caa6a69254bef0dee34535d27cc69fd688\": container with ID starting with 4f51680eb649a8778df43c6e4f2389caa6a69254bef0dee34535d27cc69fd688 not found: ID does not exist" containerID="4f51680eb649a8778df43c6e4f2389caa6a69254bef0dee34535d27cc69fd688" Dec 13 22:19:27 crc kubenswrapper[4866]: I1213 22:19:27.622986 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f51680eb649a8778df43c6e4f2389caa6a69254bef0dee34535d27cc69fd688"} err="failed to get container status \"4f51680eb649a8778df43c6e4f2389caa6a69254bef0dee34535d27cc69fd688\": rpc error: code = NotFound desc = could not find container \"4f51680eb649a8778df43c6e4f2389caa6a69254bef0dee34535d27cc69fd688\": container with ID starting with 4f51680eb649a8778df43c6e4f2389caa6a69254bef0dee34535d27cc69fd688 not found: ID does not exist" Dec 13 22:19:27 crc kubenswrapper[4866]: I1213 22:19:27.623451 4866 scope.go:117] "RemoveContainer" containerID="0a756e1889ab55c446200026441e5add70e87750f0c7bcd844c6d69410828074" Dec 13 22:19:27 crc kubenswrapper[4866]: E1213 22:19:27.623816 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a756e1889ab55c446200026441e5add70e87750f0c7bcd844c6d69410828074\": container with ID starting with 0a756e1889ab55c446200026441e5add70e87750f0c7bcd844c6d69410828074 not found: ID does not exist" containerID="0a756e1889ab55c446200026441e5add70e87750f0c7bcd844c6d69410828074" Dec 13 22:19:27 crc kubenswrapper[4866]: I1213 22:19:27.623860 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a756e1889ab55c446200026441e5add70e87750f0c7bcd844c6d69410828074"} err="failed to get container status \"0a756e1889ab55c446200026441e5add70e87750f0c7bcd844c6d69410828074\": rpc error: code = NotFound desc = could not find container \"0a756e1889ab55c446200026441e5add70e87750f0c7bcd844c6d69410828074\": container with ID starting with 0a756e1889ab55c446200026441e5add70e87750f0c7bcd844c6d69410828074 not found: ID does not exist" Dec 13 22:19:27 crc kubenswrapper[4866]: I1213 22:19:27.623881 4866 scope.go:117] "RemoveContainer" containerID="e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd" Dec 13 22:19:27 crc kubenswrapper[4866]: E1213 22:19:27.624446 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd\": container with ID starting with e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd not found: ID does not exist" containerID="e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd" Dec 13 22:19:27 crc kubenswrapper[4866]: I1213 22:19:27.624493 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd"} err="failed to get container status \"e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd\": rpc error: code = NotFound desc = could not find container \"e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd\": container with ID starting with e0102bb317f37f8f9a35e3b5c931a3790ef2bfd33f44ee787ad60c138030e0bd not found: ID does not exist" Dec 13 22:19:27 crc kubenswrapper[4866]: E1213 22:19:27.930548 4866 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="3.2s" Dec 13 22:19:28 crc kubenswrapper[4866]: I1213 22:19:28.219758 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 13 22:19:30 crc kubenswrapper[4866]: E1213 22:19:30.405911 4866 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.69:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1880e6627dbfe8b4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-13 22:19:24.620146868 +0000 UTC m=+162.661485420,LastTimestamp:2025-12-13 22:19:24.620146868 +0000 UTC m=+162.661485420,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 13 22:19:31 crc kubenswrapper[4866]: E1213 22:19:31.131437 4866 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="6.4s" Dec 13 22:19:31 crc kubenswrapper[4866]: I1213 22:19:31.181196 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rw67t" Dec 13 22:19:31 crc kubenswrapper[4866]: I1213 22:19:31.181831 4866 status_manager.go:851] "Failed to get status for pod" podUID="98e21a24-2f77-4e7a-a3ec-8c50a1822441" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:31 crc kubenswrapper[4866]: I1213 22:19:31.182579 4866 status_manager.go:851] "Failed to get status for pod" podUID="d019a2fd-1864-4c5b-8deb-62c898466850" pod="openshift-marketplace/community-operators-rw67t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rw67t\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:31 crc kubenswrapper[4866]: I1213 22:19:31.183087 4866 status_manager.go:851] "Failed to get status for pod" podUID="1a29082d-49a1-4625-9d9b-568ef75773c8" pod="openshift-marketplace/redhat-operators-zdvwj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zdvwj\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:31 crc kubenswrapper[4866]: I1213 22:19:31.183418 4866 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:31 crc kubenswrapper[4866]: I1213 22:19:31.183817 4866 status_manager.go:851] "Failed to get status for pod" podUID="b87d7d9b-aff1-45c9-8824-35fe7442cc07" pod="openshift-marketplace/redhat-operators-9rw56" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9rw56\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:32 crc kubenswrapper[4866]: I1213 22:19:32.219933 4866 status_manager.go:851] "Failed to get status for pod" podUID="d019a2fd-1864-4c5b-8deb-62c898466850" pod="openshift-marketplace/community-operators-rw67t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rw67t\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:32 crc kubenswrapper[4866]: I1213 22:19:32.220730 4866 status_manager.go:851] "Failed to get status for pod" podUID="1a29082d-49a1-4625-9d9b-568ef75773c8" pod="openshift-marketplace/redhat-operators-zdvwj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zdvwj\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:32 crc kubenswrapper[4866]: I1213 22:19:32.221189 4866 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:32 crc kubenswrapper[4866]: I1213 22:19:32.221657 4866 status_manager.go:851] "Failed to get status for pod" podUID="b87d7d9b-aff1-45c9-8824-35fe7442cc07" pod="openshift-marketplace/redhat-operators-9rw56" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9rw56\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:32 crc kubenswrapper[4866]: I1213 22:19:32.222144 4866 status_manager.go:851] "Failed to get status for pod" podUID="98e21a24-2f77-4e7a-a3ec-8c50a1822441" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:33 crc kubenswrapper[4866]: I1213 22:19:33.036276 4866 patch_prober.go:28] interesting pod/machine-config-daemon-2855n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 22:19:33 crc kubenswrapper[4866]: I1213 22:19:33.036356 4866 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2855n" podUID="9749ec6a-aa76-4ae0-a9d0-453edbf21bca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 22:19:36 crc kubenswrapper[4866]: I1213 22:19:36.213387 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:19:36 crc kubenswrapper[4866]: I1213 22:19:36.215278 4866 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:36 crc kubenswrapper[4866]: I1213 22:19:36.215747 4866 status_manager.go:851] "Failed to get status for pod" podUID="b87d7d9b-aff1-45c9-8824-35fe7442cc07" pod="openshift-marketplace/redhat-operators-9rw56" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9rw56\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:36 crc kubenswrapper[4866]: I1213 22:19:36.216255 4866 status_manager.go:851] "Failed to get status for pod" podUID="98e21a24-2f77-4e7a-a3ec-8c50a1822441" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:36 crc kubenswrapper[4866]: I1213 22:19:36.217487 4866 status_manager.go:851] "Failed to get status for pod" podUID="d019a2fd-1864-4c5b-8deb-62c898466850" pod="openshift-marketplace/community-operators-rw67t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rw67t\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:36 crc kubenswrapper[4866]: I1213 22:19:36.217785 4866 status_manager.go:851] "Failed to get status for pod" podUID="1a29082d-49a1-4625-9d9b-568ef75773c8" pod="openshift-marketplace/redhat-operators-zdvwj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zdvwj\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:36 crc kubenswrapper[4866]: I1213 22:19:36.229146 4866 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d450d44e-8219-4372-904c-6dfeb99953c2" Dec 13 22:19:36 crc kubenswrapper[4866]: I1213 22:19:36.229175 4866 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d450d44e-8219-4372-904c-6dfeb99953c2" Dec 13 22:19:36 crc kubenswrapper[4866]: E1213 22:19:36.229665 4866 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:19:36 crc kubenswrapper[4866]: I1213 22:19:36.230421 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:19:36 crc kubenswrapper[4866]: W1213 22:19:36.249901 4866 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-3795efb3a8419b64e8c83193d13fcf1e13f5b36fa3ed7d732e900c21811ba176 WatchSource:0}: Error finding container 3795efb3a8419b64e8c83193d13fcf1e13f5b36fa3ed7d732e900c21811ba176: Status 404 returned error can't find the container with id 3795efb3a8419b64e8c83193d13fcf1e13f5b36fa3ed7d732e900c21811ba176 Dec 13 22:19:36 crc kubenswrapper[4866]: I1213 22:19:36.569741 4866 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="7baa81868baf83a89fe6705150192e057432d6a42a843995fb98cf1f93dbb7ff" exitCode=0 Dec 13 22:19:36 crc kubenswrapper[4866]: I1213 22:19:36.569823 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"7baa81868baf83a89fe6705150192e057432d6a42a843995fb98cf1f93dbb7ff"} Dec 13 22:19:36 crc kubenswrapper[4866]: I1213 22:19:36.570000 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3795efb3a8419b64e8c83193d13fcf1e13f5b36fa3ed7d732e900c21811ba176"} Dec 13 22:19:36 crc kubenswrapper[4866]: I1213 22:19:36.570260 4866 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d450d44e-8219-4372-904c-6dfeb99953c2" Dec 13 22:19:36 crc kubenswrapper[4866]: I1213 22:19:36.570273 4866 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d450d44e-8219-4372-904c-6dfeb99953c2" Dec 13 22:19:36 crc kubenswrapper[4866]: E1213 22:19:36.570635 4866 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:19:36 crc kubenswrapper[4866]: I1213 22:19:36.570694 4866 status_manager.go:851] "Failed to get status for pod" podUID="b87d7d9b-aff1-45c9-8824-35fe7442cc07" pod="openshift-marketplace/redhat-operators-9rw56" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9rw56\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:36 crc kubenswrapper[4866]: I1213 22:19:36.570934 4866 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:36 crc kubenswrapper[4866]: I1213 22:19:36.571259 4866 status_manager.go:851] "Failed to get status for pod" podUID="98e21a24-2f77-4e7a-a3ec-8c50a1822441" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:36 crc kubenswrapper[4866]: I1213 22:19:36.571738 4866 status_manager.go:851] "Failed to get status 
for pod" podUID="d019a2fd-1864-4c5b-8deb-62c898466850" pod="openshift-marketplace/community-operators-rw67t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rw67t\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:36 crc kubenswrapper[4866]: I1213 22:19:36.572006 4866 status_manager.go:851] "Failed to get status for pod" podUID="1a29082d-49a1-4625-9d9b-568ef75773c8" pod="openshift-marketplace/redhat-operators-zdvwj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zdvwj\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 13 22:19:37 crc kubenswrapper[4866]: I1213 22:19:37.584080 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"be911f8b73c58bb4aa17e7fee25688baca280509598454f9c7b9cb1abd912b47"} Dec 13 22:19:37 crc kubenswrapper[4866]: I1213 22:19:37.585254 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"448fdf8159ef017bba5aecb5a3c87a7935e5bdf713fec221509c951cd4f78258"} Dec 13 22:19:37 crc kubenswrapper[4866]: I1213 22:19:37.585332 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a20d7f5bb54bcb955d25f45ae2920bd042cda237756e7369dd3409781cd083ba"} Dec 13 22:19:37 crc kubenswrapper[4866]: I1213 22:19:37.585401 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2217a44189c183513796148e0c1296edb36dad6123d4902bd4fb8ef40e05169a"} Dec 13 22:19:38 crc kubenswrapper[4866]: I1213 22:19:38.590448 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f3adc92eb75e75e8489f76190d3c3f0c8e498bb23f4de5e22cb59e6c39e78fdf"} Dec 13 22:19:38 crc kubenswrapper[4866]: I1213 22:19:38.590663 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:19:38 crc kubenswrapper[4866]: I1213 22:19:38.590882 4866 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d450d44e-8219-4372-904c-6dfeb99953c2" Dec 13 22:19:38 crc kubenswrapper[4866]: I1213 22:19:38.590910 4866 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d450d44e-8219-4372-904c-6dfeb99953c2" Dec 13 22:19:39 crc kubenswrapper[4866]: I1213 22:19:39.598448 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 13 22:19:39 crc kubenswrapper[4866]: I1213 22:19:39.598507 4866 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="2c4d57b379add2c2a64fb7e84b381674ac05488e23d7687ed9e1456d8d35c3f5" exitCode=1 Dec 13 22:19:39 crc kubenswrapper[4866]: I1213 22:19:39.598539 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"2c4d57b379add2c2a64fb7e84b381674ac05488e23d7687ed9e1456d8d35c3f5"} Dec 13 22:19:39 crc kubenswrapper[4866]: I1213 22:19:39.599074 4866 scope.go:117] "RemoveContainer" containerID="2c4d57b379add2c2a64fb7e84b381674ac05488e23d7687ed9e1456d8d35c3f5" Dec 13 22:19:40 crc kubenswrapper[4866]: I1213 22:19:40.062678 4866 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 22:19:40 crc kubenswrapper[4866]: I1213 22:19:40.606169 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 13 22:19:40 crc kubenswrapper[4866]: I1213 22:19:40.606229 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4814e8ff8c50cebef0a8b09f4ac70005aca80213619aac0f4220b8d5d23d714d"} Dec 13 22:19:41 crc kubenswrapper[4866]: I1213 22:19:41.231127 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:19:41 crc kubenswrapper[4866]: I1213 22:19:41.231181 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:19:41 crc kubenswrapper[4866]: I1213 22:19:41.235424 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:19:41 crc kubenswrapper[4866]: I1213 22:19:41.459629 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 22:19:43 crc kubenswrapper[4866]: I1213 22:19:43.598983 4866 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:19:43 crc kubenswrapper[4866]: I1213 22:19:43.619595 4866 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d450d44e-8219-4372-904c-6dfeb99953c2" Dec 13 22:19:43 crc kubenswrapper[4866]: I1213 22:19:43.619620 4866 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d450d44e-8219-4372-904c-6dfeb99953c2" Dec 13 22:19:43 crc kubenswrapper[4866]: I1213 22:19:43.624616 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:19:43 crc kubenswrapper[4866]: I1213 22:19:43.712404 4866 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="80a4a8cd-a735-47e5-9eee-2a8929b4ac22" Dec 13 22:19:44 crc kubenswrapper[4866]: I1213 22:19:44.623845 4866 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d450d44e-8219-4372-904c-6dfeb99953c2" Dec 13 22:19:44 crc kubenswrapper[4866]: I1213 22:19:44.623876 4866 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d450d44e-8219-4372-904c-6dfeb99953c2" Dec 13 22:19:44 crc kubenswrapper[4866]: I1213 22:19:44.627064 4866 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" 
pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="80a4a8cd-a735-47e5-9eee-2a8929b4ac22" Dec 13 22:19:46 crc kubenswrapper[4866]: I1213 22:19:46.527108 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 22:19:46 crc kubenswrapper[4866]: I1213 22:19:46.527208 4866 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 13 22:19:46 crc kubenswrapper[4866]: I1213 22:19:46.527529 4866 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 13 22:19:51 crc kubenswrapper[4866]: I1213 22:19:51.902630 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 13 22:19:53 crc kubenswrapper[4866]: I1213 22:19:53.819857 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 13 22:19:54 crc kubenswrapper[4866]: I1213 22:19:54.005385 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 13 22:19:54 crc kubenswrapper[4866]: I1213 22:19:54.295965 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 13 22:19:54 crc kubenswrapper[4866]: I1213 22:19:54.449582 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 13 22:19:54 crc kubenswrapper[4866]: I1213 22:19:54.746238 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 13 22:19:55 crc kubenswrapper[4866]: I1213 22:19:55.030796 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 13 22:19:55 crc kubenswrapper[4866]: I1213 22:19:55.057886 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 13 22:19:55 crc kubenswrapper[4866]: I1213 22:19:55.250811 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 13 22:19:55 crc kubenswrapper[4866]: I1213 22:19:55.295532 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 13 22:19:55 crc kubenswrapper[4866]: I1213 22:19:55.385511 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 13 22:19:55 crc kubenswrapper[4866]: I1213 22:19:55.394925 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 13 22:19:55 crc kubenswrapper[4866]: I1213 22:19:55.457160 4866 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 13 22:19:55 crc kubenswrapper[4866]: I1213 22:19:55.463890 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 13 22:19:55 crc kubenswrapper[4866]: I1213 22:19:55.549534 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 13 22:19:55 crc kubenswrapper[4866]: I1213 22:19:55.761386 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 13 22:19:55 crc kubenswrapper[4866]: I1213 22:19:55.775441 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 13 22:19:55 crc kubenswrapper[4866]: I1213 22:19:55.775440 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 13 22:19:56 crc kubenswrapper[4866]: I1213 22:19:56.398478 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 13 22:19:56 crc kubenswrapper[4866]: I1213 22:19:56.526911 4866 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 13 22:19:56 crc kubenswrapper[4866]: I1213 22:19:56.526967 4866 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 13 22:19:56 crc kubenswrapper[4866]: I1213 22:19:56.579204 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 13 22:19:56 crc kubenswrapper[4866]: I1213 22:19:56.669948 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 13 22:19:56 crc kubenswrapper[4866]: I1213 22:19:56.861375 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 13 22:19:56 crc kubenswrapper[4866]: I1213 22:19:56.881436 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 13 22:19:57 crc kubenswrapper[4866]: I1213 22:19:57.041847 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 13 22:19:57 crc kubenswrapper[4866]: I1213 22:19:57.110269 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 13 22:19:57 crc kubenswrapper[4866]: I1213 22:19:57.171994 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 13 22:19:57 crc kubenswrapper[4866]: I1213 22:19:57.205889 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 13 22:19:57 crc kubenswrapper[4866]: I1213 22:19:57.271723 4866 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 13 22:19:57 crc kubenswrapper[4866]: I1213 22:19:57.344955 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 13 22:19:57 crc kubenswrapper[4866]: I1213 22:19:57.383930 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 13 22:19:57 crc kubenswrapper[4866]: I1213 22:19:57.572582 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 13 22:19:57 crc kubenswrapper[4866]: I1213 22:19:57.650574 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 13 22:19:57 crc kubenswrapper[4866]: I1213 22:19:57.693873 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 13 22:19:57 crc kubenswrapper[4866]: I1213 22:19:57.694394 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 13 22:19:57 crc kubenswrapper[4866]: I1213 22:19:57.698701 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.196298 4866 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.249131 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.349630 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.354760 4866 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.360912 4866 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.367536 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=34.367513088 podStartE2EDuration="34.367513088s" podCreationTimestamp="2025-12-13 22:19:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:19:43.617632049 +0000 UTC m=+181.658970601" watchObservedRunningTime="2025-12-13 22:19:58.367513088 +0000 UTC m=+196.408851670" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.368258 4866 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.368319 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-7455d675f6-5drqt"] Dec 13 22:19:58 crc kubenswrapper[4866]: E1213 22:19:58.368645 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98e21a24-2f77-4e7a-a3ec-8c50a1822441" containerName="installer" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.368680 4866 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="98e21a24-2f77-4e7a-a3ec-8c50a1822441" containerName="installer" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.368765 4866 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d450d44e-8219-4372-904c-6dfeb99953c2" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.368793 4866 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d450d44e-8219-4372-904c-6dfeb99953c2" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.368918 4866 memory_manager.go:354] "RemoveStaleState removing state" podUID="98e21a24-2f77-4e7a-a3ec-8c50a1822441" containerName="installer" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.369713 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.372342 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.372527 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.372766 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.373167 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.373175 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.373286 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.377728 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.378303 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.378510 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.379082 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.379535 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.379538 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.382481 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.387491 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 
22:19:58.390488 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.390937 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.400859 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.402163 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=15.40214367 podStartE2EDuration="15.40214367s" podCreationTimestamp="2025-12-13 22:19:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:19:58.397780724 +0000 UTC m=+196.439119286" watchObservedRunningTime="2025-12-13 22:19:58.40214367 +0000 UTC m=+196.443482232" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.473289 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.476909 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7f236967-bc47-44a3-b005-899b5813e9f6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.476998 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f236967-bc47-44a3-b005-899b5813e9f6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.477143 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7f236967-bc47-44a3-b005-899b5813e9f6-v4-0-config-user-template-error\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.477186 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7f236967-bc47-44a3-b005-899b5813e9f6-v4-0-config-system-session\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.477243 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7f236967-bc47-44a3-b005-899b5813e9f6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: 
\"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.477288 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7f236967-bc47-44a3-b005-899b5813e9f6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.477314 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f236967-bc47-44a3-b005-899b5813e9f6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.477365 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7f236967-bc47-44a3-b005-899b5813e9f6-v4-0-config-system-service-ca\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.477391 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7f236967-bc47-44a3-b005-899b5813e9f6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.477449 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9kr4\" (UniqueName: \"kubernetes.io/projected/7f236967-bc47-44a3-b005-899b5813e9f6-kube-api-access-b9kr4\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.477477 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7f236967-bc47-44a3-b005-899b5813e9f6-v4-0-config-system-router-certs\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.477500 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7f236967-bc47-44a3-b005-899b5813e9f6-v4-0-config-user-template-login\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.477525 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/7f236967-bc47-44a3-b005-899b5813e9f6-audit-policies\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.477547 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7f236967-bc47-44a3-b005-899b5813e9f6-audit-dir\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.578509 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7f236967-bc47-44a3-b005-899b5813e9f6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.578566 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f236967-bc47-44a3-b005-899b5813e9f6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.578607 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7f236967-bc47-44a3-b005-899b5813e9f6-v4-0-config-user-template-error\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.578641 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7f236967-bc47-44a3-b005-899b5813e9f6-v4-0-config-system-session\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.578694 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7f236967-bc47-44a3-b005-899b5813e9f6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.578883 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7f236967-bc47-44a3-b005-899b5813e9f6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.578915 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7f236967-bc47-44a3-b005-899b5813e9f6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.578950 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7f236967-bc47-44a3-b005-899b5813e9f6-v4-0-config-system-service-ca\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.579640 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7f236967-bc47-44a3-b005-899b5813e9f6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.579646 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7f236967-bc47-44a3-b005-899b5813e9f6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.579771 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f236967-bc47-44a3-b005-899b5813e9f6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.579902 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9kr4\" (UniqueName: \"kubernetes.io/projected/7f236967-bc47-44a3-b005-899b5813e9f6-kube-api-access-b9kr4\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.580001 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7f236967-bc47-44a3-b005-899b5813e9f6-v4-0-config-system-router-certs\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.580032 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7f236967-bc47-44a3-b005-899b5813e9f6-v4-0-config-system-service-ca\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.580131 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/7f236967-bc47-44a3-b005-899b5813e9f6-v4-0-config-user-template-login\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.580215 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7f236967-bc47-44a3-b005-899b5813e9f6-audit-policies\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.580257 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7f236967-bc47-44a3-b005-899b5813e9f6-audit-dir\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.580334 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7f236967-bc47-44a3-b005-899b5813e9f6-audit-dir\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.581034 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7f236967-bc47-44a3-b005-899b5813e9f6-audit-policies\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.586598 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7f236967-bc47-44a3-b005-899b5813e9f6-v4-0-config-user-template-error\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.586970 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7f236967-bc47-44a3-b005-899b5813e9f6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.587098 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7f236967-bc47-44a3-b005-899b5813e9f6-v4-0-config-system-router-certs\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.587644 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7f236967-bc47-44a3-b005-899b5813e9f6-v4-0-config-system-session\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " 
pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.588083 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f236967-bc47-44a3-b005-899b5813e9f6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.590331 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7f236967-bc47-44a3-b005-899b5813e9f6-v4-0-config-user-template-login\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.594577 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7f236967-bc47-44a3-b005-899b5813e9f6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.595653 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7f236967-bc47-44a3-b005-899b5813e9f6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.605899 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9kr4\" (UniqueName: \"kubernetes.io/projected/7f236967-bc47-44a3-b005-899b5813e9f6-kube-api-access-b9kr4\") pod \"oauth-openshift-7455d675f6-5drqt\" (UID: \"7f236967-bc47-44a3-b005-899b5813e9f6\") " pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.613793 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.626832 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.697369 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.716167 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.744183 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.749696 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.764623 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.797246 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 13 22:19:58 crc kubenswrapper[4866]: I1213 22:19:58.860958 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 13 22:19:59 crc kubenswrapper[4866]: I1213 22:19:59.047929 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 13 22:19:59 crc kubenswrapper[4866]: I1213 22:19:59.061017 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 13 22:19:59 crc kubenswrapper[4866]: I1213 22:19:59.138542 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 13 22:19:59 crc kubenswrapper[4866]: I1213 22:19:59.205516 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 13 22:19:59 crc kubenswrapper[4866]: I1213 22:19:59.366846 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 13 22:19:59 crc kubenswrapper[4866]: I1213 22:19:59.421149 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 13 22:19:59 crc kubenswrapper[4866]: I1213 22:19:59.468605 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 13 22:19:59 crc kubenswrapper[4866]: I1213 22:19:59.499444 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 13 22:19:59 crc kubenswrapper[4866]: I1213 22:19:59.588620 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 13 22:19:59 crc kubenswrapper[4866]: I1213 22:19:59.662710 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 13 22:19:59 crc kubenswrapper[4866]: I1213 22:19:59.699521 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 13 22:19:59 crc kubenswrapper[4866]: I1213 22:19:59.861843 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 13 22:19:59 crc kubenswrapper[4866]: I1213 22:19:59.970937 4866 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 13 22:20:00 crc kubenswrapper[4866]: I1213 22:20:00.074269 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 13 22:20:00 crc kubenswrapper[4866]: I1213 22:20:00.229231 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 13 22:20:00 crc kubenswrapper[4866]: I1213 22:20:00.299713 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 13 22:20:00 crc kubenswrapper[4866]: I1213 22:20:00.348844 4866 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 13 22:20:00 crc kubenswrapper[4866]: I1213 22:20:00.398090 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 13 22:20:00 crc kubenswrapper[4866]: I1213 22:20:00.422847 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 13 22:20:00 crc kubenswrapper[4866]: I1213 22:20:00.630595 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 13 22:20:00 crc kubenswrapper[4866]: I1213 22:20:00.760578 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 13 22:20:00 crc kubenswrapper[4866]: I1213 22:20:00.826363 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 13 22:20:01 crc kubenswrapper[4866]: I1213 22:20:01.029684 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 13 22:20:01 crc kubenswrapper[4866]: I1213 22:20:01.080446 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 13 22:20:01 crc kubenswrapper[4866]: I1213 22:20:01.092461 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 13 22:20:01 crc kubenswrapper[4866]: I1213 22:20:01.134598 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7455d675f6-5drqt"] Dec 13 22:20:01 crc kubenswrapper[4866]: I1213 22:20:01.195068 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 13 22:20:01 crc kubenswrapper[4866]: I1213 22:20:01.315160 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 13 22:20:01 crc kubenswrapper[4866]: I1213 22:20:01.330386 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 13 22:20:01 crc kubenswrapper[4866]: I1213 22:20:01.511137 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 13 22:20:01 crc kubenswrapper[4866]: I1213 22:20:01.541102 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 13 22:20:01 crc kubenswrapper[4866]: E1213 22:20:01.553177 4866 log.go:32] "RunPodSandbox from runtime service failed" err=< Dec 13 22:20:01 
crc kubenswrapper[4866]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-7455d675f6-5drqt_openshift-authentication_7f236967-bc47-44a3-b005-899b5813e9f6_0(7f4c806ec26d70b2f0dfce693861247dd607efec3ea9e57392b2833f64f7eb6b): error adding pod openshift-authentication_oauth-openshift-7455d675f6-5drqt to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7f4c806ec26d70b2f0dfce693861247dd607efec3ea9e57392b2833f64f7eb6b" Netns:"/var/run/netns/195f7edb-6ac5-4fe5-95fd-7e1e2984d068" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-7455d675f6-5drqt;K8S_POD_INFRA_CONTAINER_ID=7f4c806ec26d70b2f0dfce693861247dd607efec3ea9e57392b2833f64f7eb6b;K8S_POD_UID=7f236967-bc47-44a3-b005-899b5813e9f6" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-7455d675f6-5drqt] networking: Multus: [openshift-authentication/oauth-openshift-7455d675f6-5drqt/7f236967-bc47-44a3-b005-899b5813e9f6]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-7455d675f6-5drqt in out of cluster comm: pod "oauth-openshift-7455d675f6-5drqt" not found Dec 13 22:20:01 crc kubenswrapper[4866]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 13 22:20:01 crc kubenswrapper[4866]: > Dec 13 22:20:01 crc kubenswrapper[4866]: E1213 22:20:01.553244 4866 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Dec 13 22:20:01 crc kubenswrapper[4866]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-7455d675f6-5drqt_openshift-authentication_7f236967-bc47-44a3-b005-899b5813e9f6_0(7f4c806ec26d70b2f0dfce693861247dd607efec3ea9e57392b2833f64f7eb6b): error adding pod openshift-authentication_oauth-openshift-7455d675f6-5drqt to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7f4c806ec26d70b2f0dfce693861247dd607efec3ea9e57392b2833f64f7eb6b" Netns:"/var/run/netns/195f7edb-6ac5-4fe5-95fd-7e1e2984d068" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-7455d675f6-5drqt;K8S_POD_INFRA_CONTAINER_ID=7f4c806ec26d70b2f0dfce693861247dd607efec3ea9e57392b2833f64f7eb6b;K8S_POD_UID=7f236967-bc47-44a3-b005-899b5813e9f6" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-7455d675f6-5drqt] networking: Multus: [openshift-authentication/oauth-openshift-7455d675f6-5drqt/7f236967-bc47-44a3-b005-899b5813e9f6]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-7455d675f6-5drqt in out of cluster comm: pod "oauth-openshift-7455d675f6-5drqt" not found Dec 13 22:20:01 crc kubenswrapper[4866]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 13 22:20:01 crc kubenswrapper[4866]: > pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:20:01 crc kubenswrapper[4866]: E1213 22:20:01.553268 4866 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Dec 13 22:20:01 crc kubenswrapper[4866]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-7455d675f6-5drqt_openshift-authentication_7f236967-bc47-44a3-b005-899b5813e9f6_0(7f4c806ec26d70b2f0dfce693861247dd607efec3ea9e57392b2833f64f7eb6b): error adding pod openshift-authentication_oauth-openshift-7455d675f6-5drqt to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7f4c806ec26d70b2f0dfce693861247dd607efec3ea9e57392b2833f64f7eb6b" Netns:"/var/run/netns/195f7edb-6ac5-4fe5-95fd-7e1e2984d068" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-7455d675f6-5drqt;K8S_POD_INFRA_CONTAINER_ID=7f4c806ec26d70b2f0dfce693861247dd607efec3ea9e57392b2833f64f7eb6b;K8S_POD_UID=7f236967-bc47-44a3-b005-899b5813e9f6" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-7455d675f6-5drqt] networking: Multus: [openshift-authentication/oauth-openshift-7455d675f6-5drqt/7f236967-bc47-44a3-b005-899b5813e9f6]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-7455d675f6-5drqt in out of cluster comm: pod "oauth-openshift-7455d675f6-5drqt" not found Dec 13 22:20:01 crc kubenswrapper[4866]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 13 22:20:01 crc kubenswrapper[4866]: > pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:20:01 crc kubenswrapper[4866]: E1213 22:20:01.553357 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-7455d675f6-5drqt_openshift-authentication(7f236967-bc47-44a3-b005-899b5813e9f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-7455d675f6-5drqt_openshift-authentication(7f236967-bc47-44a3-b005-899b5813e9f6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-7455d675f6-5drqt_openshift-authentication_7f236967-bc47-44a3-b005-899b5813e9f6_0(7f4c806ec26d70b2f0dfce693861247dd607efec3ea9e57392b2833f64f7eb6b): error adding pod openshift-authentication_oauth-openshift-7455d675f6-5drqt to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"7f4c806ec26d70b2f0dfce693861247dd607efec3ea9e57392b2833f64f7eb6b\\\" 
Netns:\\\"/var/run/netns/195f7edb-6ac5-4fe5-95fd-7e1e2984d068\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-7455d675f6-5drqt;K8S_POD_INFRA_CONTAINER_ID=7f4c806ec26d70b2f0dfce693861247dd607efec3ea9e57392b2833f64f7eb6b;K8S_POD_UID=7f236967-bc47-44a3-b005-899b5813e9f6\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-7455d675f6-5drqt] networking: Multus: [openshift-authentication/oauth-openshift-7455d675f6-5drqt/7f236967-bc47-44a3-b005-899b5813e9f6]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-7455d675f6-5drqt in out of cluster comm: pod \\\"oauth-openshift-7455d675f6-5drqt\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" podUID="7f236967-bc47-44a3-b005-899b5813e9f6" Dec 13 22:20:01 crc kubenswrapper[4866]: I1213 22:20:01.593024 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 13 22:20:01 crc kubenswrapper[4866]: I1213 22:20:01.699358 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 13 22:20:01 crc kubenswrapper[4866]: I1213 22:20:01.711453 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:20:01 crc kubenswrapper[4866]: I1213 22:20:01.711965 4866 util.go:30] "No sandbox for pod can be found. 
Dec 13 22:20:01 crc kubenswrapper[4866]: I1213 22:20:01.711965 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:20:01 crc kubenswrapper[4866]: I1213 22:20:01.730878 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 13 22:20:01 crc kubenswrapper[4866]: I1213 22:20:01.753927 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 13 22:20:01 crc kubenswrapper[4866]: I1213 22:20:01.833882 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 13 22:20:01 crc kubenswrapper[4866]: I1213 22:20:01.854467 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 13 22:20:01 crc kubenswrapper[4866]: I1213 22:20:01.908033 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 13 22:20:01 crc kubenswrapper[4866]: I1213 22:20:01.938482 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 13 22:20:02 crc kubenswrapper[4866]: I1213 22:20:02.003820 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 13 22:20:02 crc kubenswrapper[4866]: I1213 22:20:02.057400 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 13 22:20:02 crc kubenswrapper[4866]: I1213 22:20:02.062843 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 13 22:20:02 crc kubenswrapper[4866]: I1213 22:20:02.101869 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 13 22:20:02 crc kubenswrapper[4866]: I1213 22:20:02.122096 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 13 22:20:02 crc kubenswrapper[4866]: I1213 22:20:02.123212 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 13 22:20:02 crc kubenswrapper[4866]: I1213 22:20:02.245498 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 13 22:20:02 crc kubenswrapper[4866]: I1213 22:20:02.288332 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 13 22:20:02 crc kubenswrapper[4866]: I1213 22:20:02.344685 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 13 22:20:02 crc kubenswrapper[4866]: I1213 22:20:02.426459 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 13 22:20:02 crc kubenswrapper[4866]: I1213 22:20:02.628616 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 13 22:20:02 crc kubenswrapper[4866]: I1213 22:20:02.752572 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 13 22:20:02 crc kubenswrapper[4866]: I1213 22:20:02.791600
4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 13 22:20:02 crc kubenswrapper[4866]: I1213 22:20:02.793112 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 13 22:20:02 crc kubenswrapper[4866]: I1213 22:20:02.890806 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 13 22:20:02 crc kubenswrapper[4866]: I1213 22:20:02.896791 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 13 22:20:02 crc kubenswrapper[4866]: I1213 22:20:02.908512 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 13 22:20:02 crc kubenswrapper[4866]: I1213 22:20:02.969941 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 13 22:20:02 crc kubenswrapper[4866]: I1213 22:20:02.991018 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 13 22:20:03 crc kubenswrapper[4866]: I1213 22:20:03.036434 4866 patch_prober.go:28] interesting pod/machine-config-daemon-2855n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 22:20:03 crc kubenswrapper[4866]: I1213 22:20:03.036494 4866 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2855n" podUID="9749ec6a-aa76-4ae0-a9d0-453edbf21bca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 22:20:03 crc kubenswrapper[4866]: I1213 22:20:03.074462 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 13 22:20:03 crc kubenswrapper[4866]: I1213 22:20:03.096948 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 13 22:20:03 crc kubenswrapper[4866]: I1213 22:20:03.264445 4866 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 13 22:20:03 crc kubenswrapper[4866]: I1213 22:20:03.301392 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 13 22:20:03 crc kubenswrapper[4866]: I1213 22:20:03.327117 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 13 22:20:03 crc kubenswrapper[4866]: I1213 22:20:03.383804 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 13 22:20:03 crc kubenswrapper[4866]: I1213 22:20:03.413253 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 13 22:20:03 crc kubenswrapper[4866]: I1213 22:20:03.485528 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
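[Editor's note] The Liveness probe failure above is an ordinary HTTP check whose TCP connect was refused. Reduced to its essence it looks like the sketch below; the endpoint comes from the log, while the 1-second timeout is an assumption (it matches the kubelet's default timeoutSeconds):

```python
# Sketch: an HTTP liveness check in the spirit of the kubelet prober.
# Connect errors and non-2xx/3xx statuses count as probe failures.
import urllib.error
import urllib.request

def http_probe(url: str, timeout: float = 1.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError) as exc:
        print(f'Probe failed: Get "{url}": {exc}')
        return False

http_probe("http://127.0.0.1:8798/health")  # endpoint from the record above
```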
*v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 13 22:20:03 crc kubenswrapper[4866]: I1213 22:20:03.500783 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 13 22:20:03 crc kubenswrapper[4866]: I1213 22:20:03.533714 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 13 22:20:03 crc kubenswrapper[4866]: I1213 22:20:03.553031 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 13 22:20:03 crc kubenswrapper[4866]: I1213 22:20:03.557668 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 13 22:20:03 crc kubenswrapper[4866]: I1213 22:20:03.561256 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 13 22:20:03 crc kubenswrapper[4866]: I1213 22:20:03.579556 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 13 22:20:03 crc kubenswrapper[4866]: I1213 22:20:03.641842 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 13 22:20:03 crc kubenswrapper[4866]: I1213 22:20:03.656521 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 13 22:20:03 crc kubenswrapper[4866]: I1213 22:20:03.683902 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 13 22:20:03 crc kubenswrapper[4866]: I1213 22:20:03.706743 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 13 22:20:03 crc kubenswrapper[4866]: I1213 22:20:03.736493 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 13 22:20:03 crc kubenswrapper[4866]: I1213 22:20:03.807535 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 13 22:20:03 crc kubenswrapper[4866]: I1213 22:20:03.975272 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 13 22:20:04 crc kubenswrapper[4866]: I1213 22:20:04.010126 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 13 22:20:04 crc kubenswrapper[4866]: I1213 22:20:04.063198 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 13 22:20:04 crc kubenswrapper[4866]: I1213 22:20:04.081745 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 13 22:20:04 crc kubenswrapper[4866]: I1213 22:20:04.240901 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 13 22:20:04 crc kubenswrapper[4866]: I1213 22:20:04.286444 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 13 22:20:04 crc 
kubenswrapper[4866]: I1213 22:20:04.342989 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 13 22:20:04 crc kubenswrapper[4866]: I1213 22:20:04.355886 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 13 22:20:04 crc kubenswrapper[4866]: I1213 22:20:04.367664 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 13 22:20:04 crc kubenswrapper[4866]: I1213 22:20:04.421790 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 13 22:20:04 crc kubenswrapper[4866]: I1213 22:20:04.443129 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 13 22:20:04 crc kubenswrapper[4866]: I1213 22:20:04.487678 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 13 22:20:04 crc kubenswrapper[4866]: I1213 22:20:04.542847 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 13 22:20:04 crc kubenswrapper[4866]: E1213 22:20:04.589006 4866 log.go:32] "RunPodSandbox from runtime service failed" err=< Dec 13 22:20:04 crc kubenswrapper[4866]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-7455d675f6-5drqt_openshift-authentication_7f236967-bc47-44a3-b005-899b5813e9f6_0(609469b8258f49878c12530e09a01fd18320682c3e42fa410389295d9c4e0a0a): error adding pod openshift-authentication_oauth-openshift-7455d675f6-5drqt to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"609469b8258f49878c12530e09a01fd18320682c3e42fa410389295d9c4e0a0a" Netns:"/var/run/netns/2854ced9-6107-462a-88e6-cec7e81f1c83" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-7455d675f6-5drqt;K8S_POD_INFRA_CONTAINER_ID=609469b8258f49878c12530e09a01fd18320682c3e42fa410389295d9c4e0a0a;K8S_POD_UID=7f236967-bc47-44a3-b005-899b5813e9f6" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-7455d675f6-5drqt] networking: Multus: [openshift-authentication/oauth-openshift-7455d675f6-5drqt/7f236967-bc47-44a3-b005-899b5813e9f6]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-7455d675f6-5drqt in out of cluster comm: pod "oauth-openshift-7455d675f6-5drqt" not found Dec 13 22:20:04 crc kubenswrapper[4866]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 13 22:20:04 crc kubenswrapper[4866]: > Dec 13 22:20:04 crc kubenswrapper[4866]: E1213 22:20:04.589081 4866 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Dec 13 22:20:04 crc kubenswrapper[4866]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_oauth-openshift-7455d675f6-5drqt_openshift-authentication_7f236967-bc47-44a3-b005-899b5813e9f6_0(609469b8258f49878c12530e09a01fd18320682c3e42fa410389295d9c4e0a0a): error adding pod openshift-authentication_oauth-openshift-7455d675f6-5drqt to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"609469b8258f49878c12530e09a01fd18320682c3e42fa410389295d9c4e0a0a" Netns:"/var/run/netns/2854ced9-6107-462a-88e6-cec7e81f1c83" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-7455d675f6-5drqt;K8S_POD_INFRA_CONTAINER_ID=609469b8258f49878c12530e09a01fd18320682c3e42fa410389295d9c4e0a0a;K8S_POD_UID=7f236967-bc47-44a3-b005-899b5813e9f6" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-7455d675f6-5drqt] networking: Multus: [openshift-authentication/oauth-openshift-7455d675f6-5drqt/7f236967-bc47-44a3-b005-899b5813e9f6]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-7455d675f6-5drqt in out of cluster comm: pod "oauth-openshift-7455d675f6-5drqt" not found Dec 13 22:20:04 crc kubenswrapper[4866]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 13 22:20:04 crc kubenswrapper[4866]: > pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:20:04 crc kubenswrapper[4866]: E1213 22:20:04.589101 4866 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Dec 13 22:20:04 crc kubenswrapper[4866]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-7455d675f6-5drqt_openshift-authentication_7f236967-bc47-44a3-b005-899b5813e9f6_0(609469b8258f49878c12530e09a01fd18320682c3e42fa410389295d9c4e0a0a): error adding pod openshift-authentication_oauth-openshift-7455d675f6-5drqt to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"609469b8258f49878c12530e09a01fd18320682c3e42fa410389295d9c4e0a0a" Netns:"/var/run/netns/2854ced9-6107-462a-88e6-cec7e81f1c83" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-7455d675f6-5drqt;K8S_POD_INFRA_CONTAINER_ID=609469b8258f49878c12530e09a01fd18320682c3e42fa410389295d9c4e0a0a;K8S_POD_UID=7f236967-bc47-44a3-b005-899b5813e9f6" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-7455d675f6-5drqt] networking: Multus: [openshift-authentication/oauth-openshift-7455d675f6-5drqt/7f236967-bc47-44a3-b005-899b5813e9f6]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-7455d675f6-5drqt in out of cluster comm: pod "oauth-openshift-7455d675f6-5drqt" not found Dec 13 22:20:04 crc kubenswrapper[4866]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 13 22:20:04 crc kubenswrapper[4866]: > pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:20:04 crc kubenswrapper[4866]: E1213 22:20:04.589155 4866 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-7455d675f6-5drqt_openshift-authentication(7f236967-bc47-44a3-b005-899b5813e9f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-7455d675f6-5drqt_openshift-authentication(7f236967-bc47-44a3-b005-899b5813e9f6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-7455d675f6-5drqt_openshift-authentication_7f236967-bc47-44a3-b005-899b5813e9f6_0(609469b8258f49878c12530e09a01fd18320682c3e42fa410389295d9c4e0a0a): error adding pod openshift-authentication_oauth-openshift-7455d675f6-5drqt to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"609469b8258f49878c12530e09a01fd18320682c3e42fa410389295d9c4e0a0a\\\" Netns:\\\"/var/run/netns/2854ced9-6107-462a-88e6-cec7e81f1c83\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-7455d675f6-5drqt;K8S_POD_INFRA_CONTAINER_ID=609469b8258f49878c12530e09a01fd18320682c3e42fa410389295d9c4e0a0a;K8S_POD_UID=7f236967-bc47-44a3-b005-899b5813e9f6\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-7455d675f6-5drqt] networking: Multus: [openshift-authentication/oauth-openshift-7455d675f6-5drqt/7f236967-bc47-44a3-b005-899b5813e9f6]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-7455d675f6-5drqt in out of cluster comm: pod \\\"oauth-openshift-7455d675f6-5drqt\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" podUID="7f236967-bc47-44a3-b005-899b5813e9f6" Dec 13 22:20:04 crc kubenswrapper[4866]: I1213 22:20:04.607337 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 13 22:20:04 crc kubenswrapper[4866]: I1213 22:20:04.658632 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 13 22:20:04 crc kubenswrapper[4866]: I1213 22:20:04.693958 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 13 22:20:04 crc kubenswrapper[4866]: I1213 22:20:04.781522 4866 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 13 22:20:04 crc kubenswrapper[4866]: I1213 22:20:04.827227 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 13 22:20:04 crc kubenswrapper[4866]: I1213 22:20:04.950270 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 13 22:20:05 crc kubenswrapper[4866]: I1213 22:20:05.018518 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 13 22:20:05 crc kubenswrapper[4866]: I1213 22:20:05.024467 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 13 22:20:05 crc kubenswrapper[4866]: I1213 22:20:05.138669 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 13 22:20:05 crc kubenswrapper[4866]: I1213 22:20:05.176926 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 13 22:20:05 crc kubenswrapper[4866]: I1213 22:20:05.201260 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 13 22:20:05 crc kubenswrapper[4866]: I1213 22:20:05.325109 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 13 22:20:05 crc kubenswrapper[4866]: I1213 22:20:05.380692 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 13 22:20:05 crc kubenswrapper[4866]: I1213 22:20:05.392232 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 13 22:20:05 crc kubenswrapper[4866]: I1213 22:20:05.539284 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 13 22:20:05 crc kubenswrapper[4866]: I1213 22:20:05.589865 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 13 22:20:05 crc kubenswrapper[4866]: I1213 22:20:05.717267 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 13 22:20:05 crc kubenswrapper[4866]: I1213 22:20:05.734902 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 13 22:20:05 crc kubenswrapper[4866]: I1213 22:20:05.834276 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 13 22:20:06 crc kubenswrapper[4866]: I1213 22:20:06.005630 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 13 22:20:06 crc kubenswrapper[4866]: I1213 22:20:06.067328 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 13 22:20:06 crc kubenswrapper[4866]: I1213 22:20:06.071905 4866 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 13 22:20:06 crc kubenswrapper[4866]: I1213 22:20:06.072338 4866 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://4740520e32043b4878647765b3204eb44c4d962c4443ba33304835169934b0dd" gracePeriod=5 Dec 13 22:20:06 crc kubenswrapper[4866]: I1213 22:20:06.097358 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 13 22:20:06 crc kubenswrapper[4866]: I1213 22:20:06.133793 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 13 22:20:06 crc kubenswrapper[4866]: I1213 22:20:06.203241 4866 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 13 22:20:06 crc kubenswrapper[4866]: I1213 22:20:06.269797 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 13 22:20:06 crc kubenswrapper[4866]: I1213 22:20:06.296292 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 13 22:20:06 crc kubenswrapper[4866]: I1213 22:20:06.307272 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 13 22:20:06 crc kubenswrapper[4866]: I1213 22:20:06.323116 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 13 22:20:06 crc kubenswrapper[4866]: I1213 22:20:06.352524 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 13 22:20:06 crc kubenswrapper[4866]: I1213 22:20:06.485259 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 13 22:20:06 crc kubenswrapper[4866]: I1213 22:20:06.509618 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 13 22:20:06 crc kubenswrapper[4866]: I1213 22:20:06.526659 4866 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 13 22:20:06 crc kubenswrapper[4866]: I1213 22:20:06.526706 4866 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 13 22:20:06 crc kubenswrapper[4866]: I1213 22:20:06.526748 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 22:20:06 crc kubenswrapper[4866]: I1213 22:20:06.527437 4866 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"4814e8ff8c50cebef0a8b09f4ac70005aca80213619aac0f4220b8d5d23d714d"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Dec 13 
Dec 13 22:20:06 crc kubenswrapper[4866]: I1213 22:20:06.527533 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://4814e8ff8c50cebef0a8b09f4ac70005aca80213619aac0f4220b8d5d23d714d" gracePeriod=30 Dec 13 22:20:06 crc kubenswrapper[4866]: I1213 22:20:06.568034 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 13 22:20:06 crc kubenswrapper[4866]: I1213 22:20:06.599894 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 13 22:20:06 crc kubenswrapper[4866]: I1213 22:20:06.609813 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 13 22:20:06 crc kubenswrapper[4866]: I1213 22:20:06.619820 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 13 22:20:06 crc kubenswrapper[4866]: I1213 22:20:06.663093 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 13 22:20:06 crc kubenswrapper[4866]: I1213 22:20:06.684417 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 13 22:20:06 crc kubenswrapper[4866]: I1213 22:20:06.744753 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 13 22:20:06 crc kubenswrapper[4866]: I1213 22:20:06.774316 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 13 22:20:06 crc kubenswrapper[4866]: I1213 22:20:06.798623 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 13 22:20:06 crc kubenswrapper[4866]: I1213 22:20:06.961124 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 13 22:20:06 crc kubenswrapper[4866]: I1213 22:20:06.969150 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 13 22:20:06 crc kubenswrapper[4866]: I1213 22:20:06.978105 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 13 22:20:06 crc kubenswrapper[4866]: I1213 22:20:06.985991 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 13 22:20:07 crc kubenswrapper[4866]: I1213 22:20:07.290857 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 13 22:20:07 crc kubenswrapper[4866]: I1213 22:20:07.291252 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 13 22:20:07 crc kubenswrapper[4866]: I1213 22:20:07.321418 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 13 22:20:07 crc kubenswrapper[4866]: I1213 22:20:07.477113 4866 reflector.go:368] Caches populated for *v1.Secret from
object-"openshift-machine-api"/"machine-api-operator-tls" Dec 13 22:20:07 crc kubenswrapper[4866]: I1213 22:20:07.485737 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 13 22:20:07 crc kubenswrapper[4866]: I1213 22:20:07.563416 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 13 22:20:07 crc kubenswrapper[4866]: I1213 22:20:07.630782 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 13 22:20:07 crc kubenswrapper[4866]: I1213 22:20:07.671540 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 13 22:20:07 crc kubenswrapper[4866]: I1213 22:20:07.820876 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 13 22:20:07 crc kubenswrapper[4866]: I1213 22:20:07.831687 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 13 22:20:07 crc kubenswrapper[4866]: I1213 22:20:07.857581 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 13 22:20:07 crc kubenswrapper[4866]: I1213 22:20:07.942153 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 13 22:20:07 crc kubenswrapper[4866]: I1213 22:20:07.968709 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 13 22:20:07 crc kubenswrapper[4866]: I1213 22:20:07.977886 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 13 22:20:08 crc kubenswrapper[4866]: I1213 22:20:08.027191 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 13 22:20:08 crc kubenswrapper[4866]: I1213 22:20:08.101237 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 13 22:20:08 crc kubenswrapper[4866]: I1213 22:20:08.133680 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 13 22:20:08 crc kubenswrapper[4866]: I1213 22:20:08.301927 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 13 22:20:08 crc kubenswrapper[4866]: I1213 22:20:08.358920 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 13 22:20:08 crc kubenswrapper[4866]: I1213 22:20:08.379951 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 13 22:20:08 crc kubenswrapper[4866]: I1213 22:20:08.424226 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 13 22:20:08 crc kubenswrapper[4866]: I1213 22:20:08.439612 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 13 22:20:08 crc kubenswrapper[4866]: I1213 22:20:08.621323 4866 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 13 22:20:08 crc kubenswrapper[4866]: I1213 22:20:08.726217 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 13 22:20:08 crc kubenswrapper[4866]: I1213 22:20:08.747477 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 13 22:20:08 crc kubenswrapper[4866]: I1213 22:20:08.778608 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 13 22:20:08 crc kubenswrapper[4866]: I1213 22:20:08.951994 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 13 22:20:08 crc kubenswrapper[4866]: I1213 22:20:08.954217 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 13 22:20:08 crc kubenswrapper[4866]: I1213 22:20:08.981692 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 13 22:20:09 crc kubenswrapper[4866]: I1213 22:20:09.044938 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 13 22:20:09 crc kubenswrapper[4866]: I1213 22:20:09.096285 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 13 22:20:09 crc kubenswrapper[4866]: I1213 22:20:09.179343 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 13 22:20:09 crc kubenswrapper[4866]: I1213 22:20:09.181023 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 13 22:20:09 crc kubenswrapper[4866]: I1213 22:20:09.198728 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 13 22:20:09 crc kubenswrapper[4866]: I1213 22:20:09.481544 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 13 22:20:09 crc kubenswrapper[4866]: I1213 22:20:09.622962 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 13 22:20:09 crc kubenswrapper[4866]: I1213 22:20:09.669216 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 13 22:20:09 crc kubenswrapper[4866]: I1213 22:20:09.724912 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 13 22:20:09 crc kubenswrapper[4866]: I1213 22:20:09.902430 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 13 22:20:09 crc kubenswrapper[4866]: I1213 22:20:09.914612 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 13 22:20:10 crc kubenswrapper[4866]: I1213 22:20:10.060961 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 13 22:20:10 crc kubenswrapper[4866]: I1213 22:20:10.116240 4866 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 13 22:20:10 crc kubenswrapper[4866]: I1213 22:20:10.158075 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 13 22:20:10 crc kubenswrapper[4866]: I1213 22:20:10.242526 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 13 22:20:10 crc kubenswrapper[4866]: I1213 22:20:10.399820 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 13 22:20:10 crc kubenswrapper[4866]: I1213 22:20:10.821483 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 13 22:20:10 crc kubenswrapper[4866]: I1213 22:20:10.985551 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 13 22:20:11 crc kubenswrapper[4866]: I1213 22:20:11.209771 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 13 22:20:11 crc kubenswrapper[4866]: I1213 22:20:11.299539 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 13 22:20:11 crc kubenswrapper[4866]: I1213 22:20:11.580113 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 13 22:20:11 crc kubenswrapper[4866]: I1213 22:20:11.668624 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 13 22:20:11 crc kubenswrapper[4866]: I1213 22:20:11.668689 4866 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 22:20:11 crc kubenswrapper[4866]: I1213 22:20:11.755518 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 13 22:20:11 crc kubenswrapper[4866]: I1213 22:20:11.755560 4866 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="4740520e32043b4878647765b3204eb44c4d962c4443ba33304835169934b0dd" exitCode=137 Dec 13 22:20:11 crc kubenswrapper[4866]: I1213 22:20:11.755596 4866 scope.go:117] "RemoveContainer" containerID="4740520e32043b4878647765b3204eb44c4d962c4443ba33304835169934b0dd"
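[Editor's note] The startup-monitor container killed at 22:20:06 with gracePeriod=5 is reported above as finished with exitCode=137: it did not exit within the grace period and was SIGKILLed, since runtimes report signal deaths as 128 plus the signal number. A two-line check of that arithmetic:

```python
# exitCode=137 decoded: 128 + SIGKILL(9), the hard kill after the grace period.
import signal

assert 137 == 128 + signal.SIGKILL
print(signal.Signals(137 - 128).name)  # -> SIGKILL
```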
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 22:20:11 crc kubenswrapper[4866]: I1213 22:20:11.779712 4866 scope.go:117] "RemoveContainer" containerID="4740520e32043b4878647765b3204eb44c4d962c4443ba33304835169934b0dd" Dec 13 22:20:11 crc kubenswrapper[4866]: E1213 22:20:11.780240 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4740520e32043b4878647765b3204eb44c4d962c4443ba33304835169934b0dd\": container with ID starting with 4740520e32043b4878647765b3204eb44c4d962c4443ba33304835169934b0dd not found: ID does not exist" containerID="4740520e32043b4878647765b3204eb44c4d962c4443ba33304835169934b0dd" Dec 13 22:20:11 crc kubenswrapper[4866]: I1213 22:20:11.780271 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4740520e32043b4878647765b3204eb44c4d962c4443ba33304835169934b0dd"} err="failed to get container status \"4740520e32043b4878647765b3204eb44c4d962c4443ba33304835169934b0dd\": rpc error: code = NotFound desc = could not find container \"4740520e32043b4878647765b3204eb44c4d962c4443ba33304835169934b0dd\": container with ID starting with 4740520e32043b4878647765b3204eb44c4d962c4443ba33304835169934b0dd not found: ID does not exist" Dec 13 22:20:11 crc kubenswrapper[4866]: I1213 22:20:11.839082 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 13 22:20:11 crc kubenswrapper[4866]: I1213 22:20:11.839124 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 13 22:20:11 crc kubenswrapper[4866]: I1213 22:20:11.839160 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 13 22:20:11 crc kubenswrapper[4866]: I1213 22:20:11.839191 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 13 22:20:11 crc kubenswrapper[4866]: I1213 22:20:11.839230 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 13 22:20:11 crc kubenswrapper[4866]: I1213 22:20:11.839468 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 22:20:11 crc kubenswrapper[4866]: I1213 22:20:11.839498 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 22:20:11 crc kubenswrapper[4866]: I1213 22:20:11.839515 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 22:20:11 crc kubenswrapper[4866]: I1213 22:20:11.839531 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 22:20:11 crc kubenswrapper[4866]: I1213 22:20:11.845840 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 22:20:11 crc kubenswrapper[4866]: I1213 22:20:11.941127 4866 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 13 22:20:11 crc kubenswrapper[4866]: I1213 22:20:11.941195 4866 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 13 22:20:11 crc kubenswrapper[4866]: I1213 22:20:11.941222 4866 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 13 22:20:11 crc kubenswrapper[4866]: I1213 22:20:11.941401 4866 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 13 22:20:11 crc kubenswrapper[4866]: I1213 22:20:11.941440 4866 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 13 22:20:12 crc kubenswrapper[4866]: I1213 22:20:12.220513 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 13 22:20:12 crc kubenswrapper[4866]: I1213 22:20:12.221086 4866 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 13 22:20:12 crc kubenswrapper[4866]: I1213 22:20:12.235690 4866 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 13 22:20:12 crc kubenswrapper[4866]: I1213 22:20:12.235748 4866 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="7c3c1cf2-6fe4-4c28-bbc0-8e76c081c312" Dec 13 22:20:12 crc kubenswrapper[4866]: I1213 22:20:12.241930 4866 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 13 22:20:12 crc kubenswrapper[4866]: I1213 22:20:12.241991 4866 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="7c3c1cf2-6fe4-4c28-bbc0-8e76c081c312" Dec 13 22:20:12 crc kubenswrapper[4866]: I1213 22:20:12.960190 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 13 22:20:13 crc kubenswrapper[4866]: I1213 22:20:13.747138 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 13 22:20:19 crc kubenswrapper[4866]: I1213 22:20:19.212980 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:20:19 crc kubenswrapper[4866]: I1213 22:20:19.213695 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:20:19 crc kubenswrapper[4866]: I1213 22:20:19.458290 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7455d675f6-5drqt"] Dec 13 22:20:19 crc kubenswrapper[4866]: I1213 22:20:19.822924 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" event={"ID":"7f236967-bc47-44a3-b005-899b5813e9f6","Type":"ContainerStarted","Data":"ed5dbbd70ae46375e34af6f6539d5c3c469e4750bbec184080765f01451982c2"} Dec 13 22:20:19 crc kubenswrapper[4866]: I1213 22:20:19.822975 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" event={"ID":"7f236967-bc47-44a3-b005-899b5813e9f6","Type":"ContainerStarted","Data":"fd6781e33e3b8cb92716c8a2306e08e33967189a006f21b1ea0597fd8504c06c"} Dec 13 22:20:19 crc kubenswrapper[4866]: I1213 22:20:19.824534 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:20:19 crc kubenswrapper[4866]: I1213 22:20:19.844243 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" podStartSLOduration=85.844222652 podStartE2EDuration="1m25.844222652s" podCreationTimestamp="2025-12-13 22:18:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:20:19.843777199 +0000 UTC m=+217.885115771" watchObservedRunningTime="2025-12-13 22:20:19.844222652 +0000 UTC m=+217.885561214" Dec 13 22:20:20 crc kubenswrapper[4866]: I1213 22:20:20.314537 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7455d675f6-5drqt" Dec 13 22:20:30 crc kubenswrapper[4866]: I1213 22:20:30.461064 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-69pzm"] Dec 13 22:20:30 crc kubenswrapper[4866]: I1213 22:20:30.462787 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-69pzm" podUID="73e28583-3224-4f65-a4c6-c1aee16deda8" containerName="route-controller-manager" containerID="cri-o://9509d1691d31367e6995aaec7ea97ecc3f7fd5ae366dbba40f67ee6b589438da" gracePeriod=30 Dec 13 22:20:30 crc kubenswrapper[4866]: I1213 22:20:30.587916 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-b2688"] Dec 13 22:20:30 crc kubenswrapper[4866]: I1213 22:20:30.588336 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-b2688" podUID="be954a83-6cf4-4f06-9de5-0540e967cfe9" containerName="controller-manager" containerID="cri-o://e38c008ded3b789f11f0a7bdf80d7ea252361bbd658f54813f58b3cee7183515" gracePeriod=30 Dec 13 22:20:30 crc kubenswrapper[4866]: I1213 22:20:30.883830 4866 generic.go:334] "Generic (PLEG): container finished" podID="73e28583-3224-4f65-a4c6-c1aee16deda8" containerID="9509d1691d31367e6995aaec7ea97ecc3f7fd5ae366dbba40f67ee6b589438da" exitCode=0 Dec 13 22:20:30 crc kubenswrapper[4866]: I1213 22:20:30.883909 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-69pzm" event={"ID":"73e28583-3224-4f65-a4c6-c1aee16deda8","Type":"ContainerDied","Data":"9509d1691d31367e6995aaec7ea97ecc3f7fd5ae366dbba40f67ee6b589438da"} Dec 13 22:20:30 crc kubenswrapper[4866]: I1213 22:20:30.885505 4866 generic.go:334] "Generic (PLEG): container finished" podID="be954a83-6cf4-4f06-9de5-0540e967cfe9" containerID="e38c008ded3b789f11f0a7bdf80d7ea252361bbd658f54813f58b3cee7183515" exitCode=0 Dec 13 22:20:30 crc kubenswrapper[4866]: I1213 22:20:30.885543 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-b2688" event={"ID":"be954a83-6cf4-4f06-9de5-0540e967cfe9","Type":"ContainerDied","Data":"e38c008ded3b789f11f0a7bdf80d7ea252361bbd658f54813f58b3cee7183515"} Dec 13 22:20:30 crc kubenswrapper[4866]: I1213 22:20:30.885573 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-b2688" event={"ID":"be954a83-6cf4-4f06-9de5-0540e967cfe9","Type":"ContainerDied","Data":"0b75677b4eb901a82df99ad1a62550b0f79d7473cbbfc24bfd2eb7f1d49092f8"} Dec 13 22:20:30 crc kubenswrapper[4866]: I1213 22:20:30.885585 4866 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b75677b4eb901a82df99ad1a62550b0f79d7473cbbfc24bfd2eb7f1d49092f8" Dec 13 22:20:30 crc kubenswrapper[4866]: I1213 22:20:30.904371 4866 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-b2688" Dec 13 22:20:30 crc kubenswrapper[4866]: I1213 22:20:30.987119 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be954a83-6cf4-4f06-9de5-0540e967cfe9-config\") pod \"be954a83-6cf4-4f06-9de5-0540e967cfe9\" (UID: \"be954a83-6cf4-4f06-9de5-0540e967cfe9\") " Dec 13 22:20:30 crc kubenswrapper[4866]: I1213 22:20:30.987174 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be954a83-6cf4-4f06-9de5-0540e967cfe9-proxy-ca-bundles\") pod \"be954a83-6cf4-4f06-9de5-0540e967cfe9\" (UID: \"be954a83-6cf4-4f06-9de5-0540e967cfe9\") " Dec 13 22:20:30 crc kubenswrapper[4866]: I1213 22:20:30.987199 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nn27t\" (UniqueName: \"kubernetes.io/projected/be954a83-6cf4-4f06-9de5-0540e967cfe9-kube-api-access-nn27t\") pod \"be954a83-6cf4-4f06-9de5-0540e967cfe9\" (UID: \"be954a83-6cf4-4f06-9de5-0540e967cfe9\") " Dec 13 22:20:30 crc kubenswrapper[4866]: I1213 22:20:30.987219 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be954a83-6cf4-4f06-9de5-0540e967cfe9-client-ca\") pod \"be954a83-6cf4-4f06-9de5-0540e967cfe9\" (UID: \"be954a83-6cf4-4f06-9de5-0540e967cfe9\") " Dec 13 22:20:30 crc kubenswrapper[4866]: I1213 22:20:30.987266 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be954a83-6cf4-4f06-9de5-0540e967cfe9-serving-cert\") pod \"be954a83-6cf4-4f06-9de5-0540e967cfe9\" (UID: \"be954a83-6cf4-4f06-9de5-0540e967cfe9\") " Dec 13 22:20:30 crc kubenswrapper[4866]: I1213 22:20:30.988109 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be954a83-6cf4-4f06-9de5-0540e967cfe9-client-ca" (OuterVolumeSpecName: "client-ca") pod "be954a83-6cf4-4f06-9de5-0540e967cfe9" (UID: "be954a83-6cf4-4f06-9de5-0540e967cfe9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:20:30 crc kubenswrapper[4866]: I1213 22:20:30.988122 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be954a83-6cf4-4f06-9de5-0540e967cfe9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "be954a83-6cf4-4f06-9de5-0540e967cfe9" (UID: "be954a83-6cf4-4f06-9de5-0540e967cfe9"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:20:30 crc kubenswrapper[4866]: I1213 22:20:30.989715 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be954a83-6cf4-4f06-9de5-0540e967cfe9-config" (OuterVolumeSpecName: "config") pod "be954a83-6cf4-4f06-9de5-0540e967cfe9" (UID: "be954a83-6cf4-4f06-9de5-0540e967cfe9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:20:30 crc kubenswrapper[4866]: I1213 22:20:30.996428 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be954a83-6cf4-4f06-9de5-0540e967cfe9-kube-api-access-nn27t" (OuterVolumeSpecName: "kube-api-access-nn27t") pod "be954a83-6cf4-4f06-9de5-0540e967cfe9" (UID: "be954a83-6cf4-4f06-9de5-0540e967cfe9"). InnerVolumeSpecName "kube-api-access-nn27t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:20:30 crc kubenswrapper[4866]: I1213 22:20:30.997384 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be954a83-6cf4-4f06-9de5-0540e967cfe9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "be954a83-6cf4-4f06-9de5-0540e967cfe9" (UID: "be954a83-6cf4-4f06-9de5-0540e967cfe9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.088089 4866 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be954a83-6cf4-4f06-9de5-0540e967cfe9-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.088119 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nn27t\" (UniqueName: \"kubernetes.io/projected/be954a83-6cf4-4f06-9de5-0540e967cfe9-kube-api-access-nn27t\") on node \"crc\" DevicePath \"\"" Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.088134 4866 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be954a83-6cf4-4f06-9de5-0540e967cfe9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.088144 4866 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be954a83-6cf4-4f06-9de5-0540e967cfe9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.088155 4866 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be954a83-6cf4-4f06-9de5-0540e967cfe9-config\") on node \"crc\" DevicePath \"\"" Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.260179 4866 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-69pzm" Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.390563 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73e28583-3224-4f65-a4c6-c1aee16deda8-serving-cert\") pod \"73e28583-3224-4f65-a4c6-c1aee16deda8\" (UID: \"73e28583-3224-4f65-a4c6-c1aee16deda8\") " Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.390615 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq28b\" (UniqueName: \"kubernetes.io/projected/73e28583-3224-4f65-a4c6-c1aee16deda8-kube-api-access-kq28b\") pod \"73e28583-3224-4f65-a4c6-c1aee16deda8\" (UID: \"73e28583-3224-4f65-a4c6-c1aee16deda8\") " Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.390669 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/73e28583-3224-4f65-a4c6-c1aee16deda8-client-ca\") pod \"73e28583-3224-4f65-a4c6-c1aee16deda8\" (UID: \"73e28583-3224-4f65-a4c6-c1aee16deda8\") " Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.390709 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73e28583-3224-4f65-a4c6-c1aee16deda8-config\") pod \"73e28583-3224-4f65-a4c6-c1aee16deda8\" (UID: \"73e28583-3224-4f65-a4c6-c1aee16deda8\") " Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.391361 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73e28583-3224-4f65-a4c6-c1aee16deda8-client-ca" (OuterVolumeSpecName: "client-ca") pod "73e28583-3224-4f65-a4c6-c1aee16deda8" (UID: "73e28583-3224-4f65-a4c6-c1aee16deda8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.391382 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73e28583-3224-4f65-a4c6-c1aee16deda8-config" (OuterVolumeSpecName: "config") pod "73e28583-3224-4f65-a4c6-c1aee16deda8" (UID: "73e28583-3224-4f65-a4c6-c1aee16deda8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.391774 4866 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73e28583-3224-4f65-a4c6-c1aee16deda8-config\") on node \"crc\" DevicePath \"\"" Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.391788 4866 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/73e28583-3224-4f65-a4c6-c1aee16deda8-client-ca\") on node \"crc\" DevicePath \"\"" Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.395075 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73e28583-3224-4f65-a4c6-c1aee16deda8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "73e28583-3224-4f65-a4c6-c1aee16deda8" (UID: "73e28583-3224-4f65-a4c6-c1aee16deda8"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.395436 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73e28583-3224-4f65-a4c6-c1aee16deda8-kube-api-access-kq28b" (OuterVolumeSpecName: "kube-api-access-kq28b") pod "73e28583-3224-4f65-a4c6-c1aee16deda8" (UID: "73e28583-3224-4f65-a4c6-c1aee16deda8"). InnerVolumeSpecName "kube-api-access-kq28b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.492839 4866 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73e28583-3224-4f65-a4c6-c1aee16deda8-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.492873 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kq28b\" (UniqueName: \"kubernetes.io/projected/73e28583-3224-4f65-a4c6-c1aee16deda8-kube-api-access-kq28b\") on node \"crc\" DevicePath \"\"" Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.815108 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6db8d698b7-pqxjn"] Dec 13 22:20:31 crc kubenswrapper[4866]: E1213 22:20:31.815375 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.815392 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 13 22:20:31 crc kubenswrapper[4866]: E1213 22:20:31.815403 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be954a83-6cf4-4f06-9de5-0540e967cfe9" containerName="controller-manager" Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.815410 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="be954a83-6cf4-4f06-9de5-0540e967cfe9" containerName="controller-manager" Dec 13 22:20:31 crc kubenswrapper[4866]: E1213 22:20:31.815423 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73e28583-3224-4f65-a4c6-c1aee16deda8" containerName="route-controller-manager" Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.815432 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="73e28583-3224-4f65-a4c6-c1aee16deda8" containerName="route-controller-manager" Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.815580 4866 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.815601 4866 memory_manager.go:354] "RemoveStaleState removing state" podUID="73e28583-3224-4f65-a4c6-c1aee16deda8" containerName="route-controller-manager" Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.815610 4866 memory_manager.go:354] "RemoveStaleState removing state" podUID="be954a83-6cf4-4f06-9de5-0540e967cfe9" containerName="controller-manager" Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.816093 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-pqxjn" Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.818899 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7f87cc8767-htp9x"] Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.819955 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f87cc8767-htp9x" Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.825934 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6db8d698b7-pqxjn"] Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.830764 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f87cc8767-htp9x"] Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.891502 4866 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-b2688" Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.891741 4866 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-69pzm" Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.893104 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-69pzm" event={"ID":"73e28583-3224-4f65-a4c6-c1aee16deda8","Type":"ContainerDied","Data":"95a1540e964c2fd6941dccf15bbb9eba16c66117052d07b800a2f832e9126ff0"} Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.893213 4866 scope.go:117] "RemoveContainer" containerID="9509d1691d31367e6995aaec7ea97ecc3f7fd5ae366dbba40f67ee6b589438da" Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.897828 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92-client-ca\") pod \"route-controller-manager-6db8d698b7-pqxjn\" (UID: \"7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92\") " pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-pqxjn" Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.897956 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92-serving-cert\") pod \"route-controller-manager-6db8d698b7-pqxjn\" (UID: \"7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92\") " pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-pqxjn" Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.898151 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kr98\" (UniqueName: \"kubernetes.io/projected/89ab1c2b-6887-45ce-a47e-29b8610c9841-kube-api-access-2kr98\") pod \"controller-manager-7f87cc8767-htp9x\" (UID: \"89ab1c2b-6887-45ce-a47e-29b8610c9841\") " pod="openshift-controller-manager/controller-manager-7f87cc8767-htp9x" Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.898248 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89ab1c2b-6887-45ce-a47e-29b8610c9841-proxy-ca-bundles\") pod \"controller-manager-7f87cc8767-htp9x\" (UID: 
\"89ab1c2b-6887-45ce-a47e-29b8610c9841\") " pod="openshift-controller-manager/controller-manager-7f87cc8767-htp9x" Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.898339 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pd54\" (UniqueName: \"kubernetes.io/projected/7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92-kube-api-access-5pd54\") pod \"route-controller-manager-6db8d698b7-pqxjn\" (UID: \"7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92\") " pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-pqxjn" Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.898443 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89ab1c2b-6887-45ce-a47e-29b8610c9841-client-ca\") pod \"controller-manager-7f87cc8767-htp9x\" (UID: \"89ab1c2b-6887-45ce-a47e-29b8610c9841\") " pod="openshift-controller-manager/controller-manager-7f87cc8767-htp9x" Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.898553 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89ab1c2b-6887-45ce-a47e-29b8610c9841-config\") pod \"controller-manager-7f87cc8767-htp9x\" (UID: \"89ab1c2b-6887-45ce-a47e-29b8610c9841\") " pod="openshift-controller-manager/controller-manager-7f87cc8767-htp9x" Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.898674 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89ab1c2b-6887-45ce-a47e-29b8610c9841-serving-cert\") pod \"controller-manager-7f87cc8767-htp9x\" (UID: \"89ab1c2b-6887-45ce-a47e-29b8610c9841\") " pod="openshift-controller-manager/controller-manager-7f87cc8767-htp9x" Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.898796 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92-config\") pod \"route-controller-manager-6db8d698b7-pqxjn\" (UID: \"7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92\") " pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-pqxjn" Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.929801 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-b2688"] Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.932503 4866 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-b2688"] Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.937989 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-69pzm"] Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.947420 4866 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-69pzm"] Dec 13 22:20:31 crc kubenswrapper[4866]: I1213 22:20:31.999552 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89ab1c2b-6887-45ce-a47e-29b8610c9841-client-ca\") pod \"controller-manager-7f87cc8767-htp9x\" (UID: \"89ab1c2b-6887-45ce-a47e-29b8610c9841\") " pod="openshift-controller-manager/controller-manager-7f87cc8767-htp9x" Dec 13 22:20:31 crc 
kubenswrapper[4866]: I1213 22:20:31.999873 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89ab1c2b-6887-45ce-a47e-29b8610c9841-config\") pod \"controller-manager-7f87cc8767-htp9x\" (UID: \"89ab1c2b-6887-45ce-a47e-29b8610c9841\") " pod="openshift-controller-manager/controller-manager-7f87cc8767-htp9x" Dec 13 22:20:32 crc kubenswrapper[4866]: I1213 22:20:32.000571 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89ab1c2b-6887-45ce-a47e-29b8610c9841-client-ca\") pod \"controller-manager-7f87cc8767-htp9x\" (UID: \"89ab1c2b-6887-45ce-a47e-29b8610c9841\") " pod="openshift-controller-manager/controller-manager-7f87cc8767-htp9x" Dec 13 22:20:32 crc kubenswrapper[4866]: I1213 22:20:32.001211 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89ab1c2b-6887-45ce-a47e-29b8610c9841-serving-cert\") pod \"controller-manager-7f87cc8767-htp9x\" (UID: \"89ab1c2b-6887-45ce-a47e-29b8610c9841\") " pod="openshift-controller-manager/controller-manager-7f87cc8767-htp9x" Dec 13 22:20:32 crc kubenswrapper[4866]: I1213 22:20:32.001307 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92-config\") pod \"route-controller-manager-6db8d698b7-pqxjn\" (UID: \"7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92\") " pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-pqxjn" Dec 13 22:20:32 crc kubenswrapper[4866]: I1213 22:20:32.001365 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92-client-ca\") pod \"route-controller-manager-6db8d698b7-pqxjn\" (UID: \"7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92\") " pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-pqxjn" Dec 13 22:20:32 crc kubenswrapper[4866]: I1213 22:20:32.001437 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92-serving-cert\") pod \"route-controller-manager-6db8d698b7-pqxjn\" (UID: \"7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92\") " pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-pqxjn" Dec 13 22:20:32 crc kubenswrapper[4866]: I1213 22:20:32.001485 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kr98\" (UniqueName: \"kubernetes.io/projected/89ab1c2b-6887-45ce-a47e-29b8610c9841-kube-api-access-2kr98\") pod \"controller-manager-7f87cc8767-htp9x\" (UID: \"89ab1c2b-6887-45ce-a47e-29b8610c9841\") " pod="openshift-controller-manager/controller-manager-7f87cc8767-htp9x" Dec 13 22:20:32 crc kubenswrapper[4866]: I1213 22:20:32.001522 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89ab1c2b-6887-45ce-a47e-29b8610c9841-proxy-ca-bundles\") pod \"controller-manager-7f87cc8767-htp9x\" (UID: \"89ab1c2b-6887-45ce-a47e-29b8610c9841\") " pod="openshift-controller-manager/controller-manager-7f87cc8767-htp9x" Dec 13 22:20:32 crc kubenswrapper[4866]: I1213 22:20:32.001562 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pd54\" (UniqueName: 
\"kubernetes.io/projected/7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92-kube-api-access-5pd54\") pod \"route-controller-manager-6db8d698b7-pqxjn\" (UID: \"7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92\") " pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-pqxjn" Dec 13 22:20:32 crc kubenswrapper[4866]: I1213 22:20:32.002226 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89ab1c2b-6887-45ce-a47e-29b8610c9841-config\") pod \"controller-manager-7f87cc8767-htp9x\" (UID: \"89ab1c2b-6887-45ce-a47e-29b8610c9841\") " pod="openshift-controller-manager/controller-manager-7f87cc8767-htp9x" Dec 13 22:20:32 crc kubenswrapper[4866]: I1213 22:20:32.002702 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92-client-ca\") pod \"route-controller-manager-6db8d698b7-pqxjn\" (UID: \"7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92\") " pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-pqxjn" Dec 13 22:20:32 crc kubenswrapper[4866]: I1213 22:20:32.002952 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92-config\") pod \"route-controller-manager-6db8d698b7-pqxjn\" (UID: \"7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92\") " pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-pqxjn" Dec 13 22:20:32 crc kubenswrapper[4866]: I1213 22:20:32.002987 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89ab1c2b-6887-45ce-a47e-29b8610c9841-proxy-ca-bundles\") pod \"controller-manager-7f87cc8767-htp9x\" (UID: \"89ab1c2b-6887-45ce-a47e-29b8610c9841\") " pod="openshift-controller-manager/controller-manager-7f87cc8767-htp9x" Dec 13 22:20:32 crc kubenswrapper[4866]: I1213 22:20:32.006130 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92-serving-cert\") pod \"route-controller-manager-6db8d698b7-pqxjn\" (UID: \"7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92\") " pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-pqxjn" Dec 13 22:20:32 crc kubenswrapper[4866]: I1213 22:20:32.007667 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89ab1c2b-6887-45ce-a47e-29b8610c9841-serving-cert\") pod \"controller-manager-7f87cc8767-htp9x\" (UID: \"89ab1c2b-6887-45ce-a47e-29b8610c9841\") " pod="openshift-controller-manager/controller-manager-7f87cc8767-htp9x" Dec 13 22:20:32 crc kubenswrapper[4866]: I1213 22:20:32.020655 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pd54\" (UniqueName: \"kubernetes.io/projected/7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92-kube-api-access-5pd54\") pod \"route-controller-manager-6db8d698b7-pqxjn\" (UID: \"7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92\") " pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-pqxjn" Dec 13 22:20:32 crc kubenswrapper[4866]: I1213 22:20:32.021486 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kr98\" (UniqueName: \"kubernetes.io/projected/89ab1c2b-6887-45ce-a47e-29b8610c9841-kube-api-access-2kr98\") pod \"controller-manager-7f87cc8767-htp9x\" (UID: 
\"89ab1c2b-6887-45ce-a47e-29b8610c9841\") " pod="openshift-controller-manager/controller-manager-7f87cc8767-htp9x" Dec 13 22:20:32 crc kubenswrapper[4866]: I1213 22:20:32.096325 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f87cc8767-htp9x"] Dec 13 22:20:32 crc kubenswrapper[4866]: I1213 22:20:32.096721 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f87cc8767-htp9x" Dec 13 22:20:32 crc kubenswrapper[4866]: I1213 22:20:32.134377 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-pqxjn" Dec 13 22:20:32 crc kubenswrapper[4866]: I1213 22:20:32.136599 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6db8d698b7-pqxjn"] Dec 13 22:20:32 crc kubenswrapper[4866]: I1213 22:20:32.228200 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73e28583-3224-4f65-a4c6-c1aee16deda8" path="/var/lib/kubelet/pods/73e28583-3224-4f65-a4c6-c1aee16deda8/volumes" Dec 13 22:20:32 crc kubenswrapper[4866]: I1213 22:20:32.229020 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be954a83-6cf4-4f06-9de5-0540e967cfe9" path="/var/lib/kubelet/pods/be954a83-6cf4-4f06-9de5-0540e967cfe9/volumes" Dec 13 22:20:32 crc kubenswrapper[4866]: I1213 22:20:32.349896 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f87cc8767-htp9x"] Dec 13 22:20:32 crc kubenswrapper[4866]: W1213 22:20:32.358587 4866 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89ab1c2b_6887_45ce_a47e_29b8610c9841.slice/crio-9a86fe9bb1f35bd1239631701921a4b72a5dfb1fcbcb4baad63613d9f264854a WatchSource:0}: Error finding container 9a86fe9bb1f35bd1239631701921a4b72a5dfb1fcbcb4baad63613d9f264854a: Status 404 returned error can't find the container with id 9a86fe9bb1f35bd1239631701921a4b72a5dfb1fcbcb4baad63613d9f264854a Dec 13 22:20:32 crc kubenswrapper[4866]: I1213 22:20:32.399579 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6db8d698b7-pqxjn"] Dec 13 22:20:32 crc kubenswrapper[4866]: I1213 22:20:32.898620 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-pqxjn" event={"ID":"7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92","Type":"ContainerStarted","Data":"71f3736e1ca148e98a7d006bb3e14fdc5b97692745b3f6356249dc9d88061ac2"} Dec 13 22:20:32 crc kubenswrapper[4866]: I1213 22:20:32.898906 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-pqxjn" event={"ID":"7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92","Type":"ContainerStarted","Data":"69b97722b77fa1b730165794f38a31a891edf24d2a015e6363931f30b0061916"} Dec 13 22:20:32 crc kubenswrapper[4866]: I1213 22:20:32.898926 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-pqxjn" Dec 13 22:20:32 crc kubenswrapper[4866]: I1213 22:20:32.898710 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-pqxjn" 
podUID="7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92" containerName="route-controller-manager" containerID="cri-o://71f3736e1ca148e98a7d006bb3e14fdc5b97692745b3f6356249dc9d88061ac2" gracePeriod=30 Dec 13 22:20:32 crc kubenswrapper[4866]: I1213 22:20:32.900639 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f87cc8767-htp9x" event={"ID":"89ab1c2b-6887-45ce-a47e-29b8610c9841","Type":"ContainerStarted","Data":"6a19d718e16a7d29726d0e30fa2dc940b8737383a77f197dcffc7f0e6a5d50e5"} Dec 13 22:20:32 crc kubenswrapper[4866]: I1213 22:20:32.900663 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f87cc8767-htp9x" event={"ID":"89ab1c2b-6887-45ce-a47e-29b8610c9841","Type":"ContainerStarted","Data":"9a86fe9bb1f35bd1239631701921a4b72a5dfb1fcbcb4baad63613d9f264854a"} Dec 13 22:20:32 crc kubenswrapper[4866]: I1213 22:20:32.900794 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7f87cc8767-htp9x" podUID="89ab1c2b-6887-45ce-a47e-29b8610c9841" containerName="controller-manager" containerID="cri-o://6a19d718e16a7d29726d0e30fa2dc940b8737383a77f197dcffc7f0e6a5d50e5" gracePeriod=30 Dec 13 22:20:32 crc kubenswrapper[4866]: I1213 22:20:32.900898 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7f87cc8767-htp9x" Dec 13 22:20:32 crc kubenswrapper[4866]: I1213 22:20:32.905627 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-pqxjn" Dec 13 22:20:32 crc kubenswrapper[4866]: I1213 22:20:32.909665 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7f87cc8767-htp9x" Dec 13 22:20:32 crc kubenswrapper[4866]: I1213 22:20:32.922992 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-pqxjn" podStartSLOduration=2.922976995 podStartE2EDuration="2.922976995s" podCreationTimestamp="2025-12-13 22:20:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:20:32.920083676 +0000 UTC m=+230.961422228" watchObservedRunningTime="2025-12-13 22:20:32.922976995 +0000 UTC m=+230.964315547" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.006757 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7f87cc8767-htp9x" podStartSLOduration=3.006739137 podStartE2EDuration="3.006739137s" podCreationTimestamp="2025-12-13 22:20:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:20:32.962445526 +0000 UTC m=+231.003784078" watchObservedRunningTime="2025-12-13 22:20:33.006739137 +0000 UTC m=+231.048077689" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.039398 4866 patch_prober.go:28] interesting pod/machine-config-daemon-2855n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.039447 4866 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2855n" podUID="9749ec6a-aa76-4ae0-a9d0-453edbf21bca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.039502 4866 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2855n" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.040065 4866 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"26c741b622f1c2bb9339ddf226c75ab27cc68c4989f5c53436b7bf6e42b82176"} pod="openshift-machine-config-operator/machine-config-daemon-2855n" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.040112 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2855n" podUID="9749ec6a-aa76-4ae0-a9d0-453edbf21bca" containerName="machine-config-daemon" containerID="cri-o://26c741b622f1c2bb9339ddf226c75ab27cc68c4989f5c53436b7bf6e42b82176" gracePeriod=600 Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.327013 4866 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-pqxjn" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.328653 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pd54\" (UniqueName: \"kubernetes.io/projected/7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92-kube-api-access-5pd54\") pod \"7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92\" (UID: \"7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92\") " Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.328734 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92-client-ca\") pod \"7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92\" (UID: \"7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92\") " Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.328758 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92-config\") pod \"7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92\" (UID: \"7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92\") " Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.328802 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92-serving-cert\") pod \"7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92\" (UID: \"7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92\") " Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.330596 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92-client-ca" (OuterVolumeSpecName: "client-ca") pod "7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92" (UID: "7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.331076 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92-config" (OuterVolumeSpecName: "config") pod "7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92" (UID: "7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.335248 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92-kube-api-access-5pd54" (OuterVolumeSpecName: "kube-api-access-5pd54") pod "7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92" (UID: "7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92"). InnerVolumeSpecName "kube-api-access-5pd54". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.335280 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92" (UID: "7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.354600 4866 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f87cc8767-htp9x" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.358691 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-986db4786-sbdcl"] Dec 13 22:20:33 crc kubenswrapper[4866]: E1213 22:20:33.358895 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92" containerName="route-controller-manager" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.358906 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92" containerName="route-controller-manager" Dec 13 22:20:33 crc kubenswrapper[4866]: E1213 22:20:33.358926 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89ab1c2b-6887-45ce-a47e-29b8610c9841" containerName="controller-manager" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.358933 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="89ab1c2b-6887-45ce-a47e-29b8610c9841" containerName="controller-manager" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.359018 4866 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92" containerName="route-controller-manager" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.359033 4866 memory_manager.go:354] "RemoveStaleState removing state" podUID="89ab1c2b-6887-45ce-a47e-29b8610c9841" containerName="controller-manager" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.359407 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-986db4786-sbdcl" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.369906 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-986db4786-sbdcl"] Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.429978 4866 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92-client-ca\") on node \"crc\" DevicePath \"\"" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.430024 4866 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92-config\") on node \"crc\" DevicePath \"\"" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.430036 4866 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.430089 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pd54\" (UniqueName: \"kubernetes.io/projected/7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92-kube-api-access-5pd54\") on node \"crc\" DevicePath \"\"" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.531115 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89ab1c2b-6887-45ce-a47e-29b8610c9841-client-ca\") pod \"89ab1c2b-6887-45ce-a47e-29b8610c9841\" (UID: \"89ab1c2b-6887-45ce-a47e-29b8610c9841\") " Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.531190 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89ab1c2b-6887-45ce-a47e-29b8610c9841-config\") pod \"89ab1c2b-6887-45ce-a47e-29b8610c9841\" (UID: \"89ab1c2b-6887-45ce-a47e-29b8610c9841\") " Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.531262 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kr98\" (UniqueName: \"kubernetes.io/projected/89ab1c2b-6887-45ce-a47e-29b8610c9841-kube-api-access-2kr98\") pod \"89ab1c2b-6887-45ce-a47e-29b8610c9841\" (UID: \"89ab1c2b-6887-45ce-a47e-29b8610c9841\") " Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.531299 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89ab1c2b-6887-45ce-a47e-29b8610c9841-proxy-ca-bundles\") pod \"89ab1c2b-6887-45ce-a47e-29b8610c9841\" (UID: \"89ab1c2b-6887-45ce-a47e-29b8610c9841\") " Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.531333 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89ab1c2b-6887-45ce-a47e-29b8610c9841-serving-cert\") pod \"89ab1c2b-6887-45ce-a47e-29b8610c9841\" (UID: \"89ab1c2b-6887-45ce-a47e-29b8610c9841\") " Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.531467 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf19516e-e35b-469c-8eb6-78f6302d60a8-serving-cert\") pod \"route-controller-manager-986db4786-sbdcl\" (UID: \"bf19516e-e35b-469c-8eb6-78f6302d60a8\") " 
pod="openshift-route-controller-manager/route-controller-manager-986db4786-sbdcl" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.531569 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcq4f\" (UniqueName: \"kubernetes.io/projected/bf19516e-e35b-469c-8eb6-78f6302d60a8-kube-api-access-xcq4f\") pod \"route-controller-manager-986db4786-sbdcl\" (UID: \"bf19516e-e35b-469c-8eb6-78f6302d60a8\") " pod="openshift-route-controller-manager/route-controller-manager-986db4786-sbdcl" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.531609 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf19516e-e35b-469c-8eb6-78f6302d60a8-client-ca\") pod \"route-controller-manager-986db4786-sbdcl\" (UID: \"bf19516e-e35b-469c-8eb6-78f6302d60a8\") " pod="openshift-route-controller-manager/route-controller-manager-986db4786-sbdcl" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.531635 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf19516e-e35b-469c-8eb6-78f6302d60a8-config\") pod \"route-controller-manager-986db4786-sbdcl\" (UID: \"bf19516e-e35b-469c-8eb6-78f6302d60a8\") " pod="openshift-route-controller-manager/route-controller-manager-986db4786-sbdcl" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.531728 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89ab1c2b-6887-45ce-a47e-29b8610c9841-client-ca" (OuterVolumeSpecName: "client-ca") pod "89ab1c2b-6887-45ce-a47e-29b8610c9841" (UID: "89ab1c2b-6887-45ce-a47e-29b8610c9841"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.531794 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89ab1c2b-6887-45ce-a47e-29b8610c9841-config" (OuterVolumeSpecName: "config") pod "89ab1c2b-6887-45ce-a47e-29b8610c9841" (UID: "89ab1c2b-6887-45ce-a47e-29b8610c9841"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.531884 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89ab1c2b-6887-45ce-a47e-29b8610c9841-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "89ab1c2b-6887-45ce-a47e-29b8610c9841" (UID: "89ab1c2b-6887-45ce-a47e-29b8610c9841"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.535001 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89ab1c2b-6887-45ce-a47e-29b8610c9841-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "89ab1c2b-6887-45ce-a47e-29b8610c9841" (UID: "89ab1c2b-6887-45ce-a47e-29b8610c9841"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.535125 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89ab1c2b-6887-45ce-a47e-29b8610c9841-kube-api-access-2kr98" (OuterVolumeSpecName: "kube-api-access-2kr98") pod "89ab1c2b-6887-45ce-a47e-29b8610c9841" (UID: "89ab1c2b-6887-45ce-a47e-29b8610c9841"). 
InnerVolumeSpecName "kube-api-access-2kr98". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.632379 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcq4f\" (UniqueName: \"kubernetes.io/projected/bf19516e-e35b-469c-8eb6-78f6302d60a8-kube-api-access-xcq4f\") pod \"route-controller-manager-986db4786-sbdcl\" (UID: \"bf19516e-e35b-469c-8eb6-78f6302d60a8\") " pod="openshift-route-controller-manager/route-controller-manager-986db4786-sbdcl" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.632587 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf19516e-e35b-469c-8eb6-78f6302d60a8-client-ca\") pod \"route-controller-manager-986db4786-sbdcl\" (UID: \"bf19516e-e35b-469c-8eb6-78f6302d60a8\") " pod="openshift-route-controller-manager/route-controller-manager-986db4786-sbdcl" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.632612 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf19516e-e35b-469c-8eb6-78f6302d60a8-config\") pod \"route-controller-manager-986db4786-sbdcl\" (UID: \"bf19516e-e35b-469c-8eb6-78f6302d60a8\") " pod="openshift-route-controller-manager/route-controller-manager-986db4786-sbdcl" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.632636 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf19516e-e35b-469c-8eb6-78f6302d60a8-serving-cert\") pod \"route-controller-manager-986db4786-sbdcl\" (UID: \"bf19516e-e35b-469c-8eb6-78f6302d60a8\") " pod="openshift-route-controller-manager/route-controller-manager-986db4786-sbdcl" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.632679 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kr98\" (UniqueName: \"kubernetes.io/projected/89ab1c2b-6887-45ce-a47e-29b8610c9841-kube-api-access-2kr98\") on node \"crc\" DevicePath \"\"" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.632688 4866 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89ab1c2b-6887-45ce-a47e-29b8610c9841-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.632698 4866 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89ab1c2b-6887-45ce-a47e-29b8610c9841-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.632707 4866 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89ab1c2b-6887-45ce-a47e-29b8610c9841-client-ca\") on node \"crc\" DevicePath \"\"" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.632717 4866 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89ab1c2b-6887-45ce-a47e-29b8610c9841-config\") on node \"crc\" DevicePath \"\"" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.633607 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf19516e-e35b-469c-8eb6-78f6302d60a8-client-ca\") pod \"route-controller-manager-986db4786-sbdcl\" (UID: \"bf19516e-e35b-469c-8eb6-78f6302d60a8\") " 
pod="openshift-route-controller-manager/route-controller-manager-986db4786-sbdcl" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.634252 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf19516e-e35b-469c-8eb6-78f6302d60a8-config\") pod \"route-controller-manager-986db4786-sbdcl\" (UID: \"bf19516e-e35b-469c-8eb6-78f6302d60a8\") " pod="openshift-route-controller-manager/route-controller-manager-986db4786-sbdcl" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.637032 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf19516e-e35b-469c-8eb6-78f6302d60a8-serving-cert\") pod \"route-controller-manager-986db4786-sbdcl\" (UID: \"bf19516e-e35b-469c-8eb6-78f6302d60a8\") " pod="openshift-route-controller-manager/route-controller-manager-986db4786-sbdcl" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.650808 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcq4f\" (UniqueName: \"kubernetes.io/projected/bf19516e-e35b-469c-8eb6-78f6302d60a8-kube-api-access-xcq4f\") pod \"route-controller-manager-986db4786-sbdcl\" (UID: \"bf19516e-e35b-469c-8eb6-78f6302d60a8\") " pod="openshift-route-controller-manager/route-controller-manager-986db4786-sbdcl" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.671443 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-986db4786-sbdcl" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.828295 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-986db4786-sbdcl"] Dec 13 22:20:33 crc kubenswrapper[4866]: W1213 22:20:33.832978 4866 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf19516e_e35b_469c_8eb6_78f6302d60a8.slice/crio-0d30d4ea12de2f11466624130655cdf33dcb15af48d2a8ba4207b4826c736e2c WatchSource:0}: Error finding container 0d30d4ea12de2f11466624130655cdf33dcb15af48d2a8ba4207b4826c736e2c: Status 404 returned error can't find the container with id 0d30d4ea12de2f11466624130655cdf33dcb15af48d2a8ba4207b4826c736e2c Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.907166 4866 generic.go:334] "Generic (PLEG): container finished" podID="9749ec6a-aa76-4ae0-a9d0-453edbf21bca" containerID="26c741b622f1c2bb9339ddf226c75ab27cc68c4989f5c53436b7bf6e42b82176" exitCode=0 Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.907248 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2855n" event={"ID":"9749ec6a-aa76-4ae0-a9d0-453edbf21bca","Type":"ContainerDied","Data":"26c741b622f1c2bb9339ddf226c75ab27cc68c4989f5c53436b7bf6e42b82176"} Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.907278 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2855n" event={"ID":"9749ec6a-aa76-4ae0-a9d0-453edbf21bca","Type":"ContainerStarted","Data":"ccf6d32a3a2c377b5e2390750299497b5183e20a129ad47bea0ebd1b2415fb34"} Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.908685 4866 generic.go:334] "Generic (PLEG): container finished" podID="7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92" containerID="71f3736e1ca148e98a7d006bb3e14fdc5b97692745b3f6356249dc9d88061ac2" exitCode=0 Dec 13 22:20:33 crc kubenswrapper[4866]: 
I1213 22:20:33.908754 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-pqxjn" event={"ID":"7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92","Type":"ContainerDied","Data":"71f3736e1ca148e98a7d006bb3e14fdc5b97692745b3f6356249dc9d88061ac2"} Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.908782 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-pqxjn" event={"ID":"7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92","Type":"ContainerDied","Data":"69b97722b77fa1b730165794f38a31a891edf24d2a015e6363931f30b0061916"} Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.908823 4866 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-pqxjn" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.908847 4866 scope.go:117] "RemoveContainer" containerID="71f3736e1ca148e98a7d006bb3e14fdc5b97692745b3f6356249dc9d88061ac2" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.912331 4866 generic.go:334] "Generic (PLEG): container finished" podID="89ab1c2b-6887-45ce-a47e-29b8610c9841" containerID="6a19d718e16a7d29726d0e30fa2dc940b8737383a77f197dcffc7f0e6a5d50e5" exitCode=0 Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.912371 4866 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f87cc8767-htp9x" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.912391 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f87cc8767-htp9x" event={"ID":"89ab1c2b-6887-45ce-a47e-29b8610c9841","Type":"ContainerDied","Data":"6a19d718e16a7d29726d0e30fa2dc940b8737383a77f197dcffc7f0e6a5d50e5"} Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.912421 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f87cc8767-htp9x" event={"ID":"89ab1c2b-6887-45ce-a47e-29b8610c9841","Type":"ContainerDied","Data":"9a86fe9bb1f35bd1239631701921a4b72a5dfb1fcbcb4baad63613d9f264854a"} Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.913670 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-986db4786-sbdcl" event={"ID":"bf19516e-e35b-469c-8eb6-78f6302d60a8","Type":"ContainerStarted","Data":"0d30d4ea12de2f11466624130655cdf33dcb15af48d2a8ba4207b4826c736e2c"} Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.933216 4866 scope.go:117] "RemoveContainer" containerID="71f3736e1ca148e98a7d006bb3e14fdc5b97692745b3f6356249dc9d88061ac2" Dec 13 22:20:33 crc kubenswrapper[4866]: E1213 22:20:33.933688 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71f3736e1ca148e98a7d006bb3e14fdc5b97692745b3f6356249dc9d88061ac2\": container with ID starting with 71f3736e1ca148e98a7d006bb3e14fdc5b97692745b3f6356249dc9d88061ac2 not found: ID does not exist" containerID="71f3736e1ca148e98a7d006bb3e14fdc5b97692745b3f6356249dc9d88061ac2" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.933740 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71f3736e1ca148e98a7d006bb3e14fdc5b97692745b3f6356249dc9d88061ac2"} err="failed to get container status \"71f3736e1ca148e98a7d006bb3e14fdc5b97692745b3f6356249dc9d88061ac2\": rpc 
error: code = NotFound desc = could not find container \"71f3736e1ca148e98a7d006bb3e14fdc5b97692745b3f6356249dc9d88061ac2\": container with ID starting with 71f3736e1ca148e98a7d006bb3e14fdc5b97692745b3f6356249dc9d88061ac2 not found: ID does not exist" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.933771 4866 scope.go:117] "RemoveContainer" containerID="6a19d718e16a7d29726d0e30fa2dc940b8737383a77f197dcffc7f0e6a5d50e5" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.962662 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6db8d698b7-pqxjn"] Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.968227 4866 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6db8d698b7-pqxjn"] Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.968556 4866 scope.go:117] "RemoveContainer" containerID="6a19d718e16a7d29726d0e30fa2dc940b8737383a77f197dcffc7f0e6a5d50e5" Dec 13 22:20:33 crc kubenswrapper[4866]: E1213 22:20:33.968862 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a19d718e16a7d29726d0e30fa2dc940b8737383a77f197dcffc7f0e6a5d50e5\": container with ID starting with 6a19d718e16a7d29726d0e30fa2dc940b8737383a77f197dcffc7f0e6a5d50e5 not found: ID does not exist" containerID="6a19d718e16a7d29726d0e30fa2dc940b8737383a77f197dcffc7f0e6a5d50e5" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.968902 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a19d718e16a7d29726d0e30fa2dc940b8737383a77f197dcffc7f0e6a5d50e5"} err="failed to get container status \"6a19d718e16a7d29726d0e30fa2dc940b8737383a77f197dcffc7f0e6a5d50e5\": rpc error: code = NotFound desc = could not find container \"6a19d718e16a7d29726d0e30fa2dc940b8737383a77f197dcffc7f0e6a5d50e5\": container with ID starting with 6a19d718e16a7d29726d0e30fa2dc940b8737383a77f197dcffc7f0e6a5d50e5 not found: ID does not exist" Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.974093 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f87cc8767-htp9x"] Dec 13 22:20:33 crc kubenswrapper[4866]: I1213 22:20:33.977773 4866 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7f87cc8767-htp9x"] Dec 13 22:20:34 crc kubenswrapper[4866]: I1213 22:20:34.026081 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9rw56"] Dec 13 22:20:34 crc kubenswrapper[4866]: I1213 22:20:34.026295 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9rw56" podUID="b87d7d9b-aff1-45c9-8824-35fe7442cc07" containerName="registry-server" containerID="cri-o://be2646684a97bff4eb5771338cc8a629d0ac83adc81e965ad1e9555ab58fc9b3" gracePeriod=2 Dec 13 22:20:34 crc kubenswrapper[4866]: I1213 22:20:34.219042 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92" path="/var/lib/kubelet/pods/7f0b3dbe-8cd1-4c6a-a56f-194935ca7f92/volumes" Dec 13 22:20:34 crc kubenswrapper[4866]: I1213 22:20:34.219597 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89ab1c2b-6887-45ce-a47e-29b8610c9841" path="/var/lib/kubelet/pods/89ab1c2b-6887-45ce-a47e-29b8610c9841/volumes" Dec 13 22:20:34 crc kubenswrapper[4866]: I1213 22:20:34.405753 
4866 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9rw56" Dec 13 22:20:34 crc kubenswrapper[4866]: I1213 22:20:34.441374 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6km9\" (UniqueName: \"kubernetes.io/projected/b87d7d9b-aff1-45c9-8824-35fe7442cc07-kube-api-access-w6km9\") pod \"b87d7d9b-aff1-45c9-8824-35fe7442cc07\" (UID: \"b87d7d9b-aff1-45c9-8824-35fe7442cc07\") " Dec 13 22:20:34 crc kubenswrapper[4866]: I1213 22:20:34.441437 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b87d7d9b-aff1-45c9-8824-35fe7442cc07-utilities\") pod \"b87d7d9b-aff1-45c9-8824-35fe7442cc07\" (UID: \"b87d7d9b-aff1-45c9-8824-35fe7442cc07\") " Dec 13 22:20:34 crc kubenswrapper[4866]: I1213 22:20:34.441459 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b87d7d9b-aff1-45c9-8824-35fe7442cc07-catalog-content\") pod \"b87d7d9b-aff1-45c9-8824-35fe7442cc07\" (UID: \"b87d7d9b-aff1-45c9-8824-35fe7442cc07\") " Dec 13 22:20:34 crc kubenswrapper[4866]: I1213 22:20:34.442632 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b87d7d9b-aff1-45c9-8824-35fe7442cc07-utilities" (OuterVolumeSpecName: "utilities") pod "b87d7d9b-aff1-45c9-8824-35fe7442cc07" (UID: "b87d7d9b-aff1-45c9-8824-35fe7442cc07"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 22:20:34 crc kubenswrapper[4866]: I1213 22:20:34.447009 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b87d7d9b-aff1-45c9-8824-35fe7442cc07-kube-api-access-w6km9" (OuterVolumeSpecName: "kube-api-access-w6km9") pod "b87d7d9b-aff1-45c9-8824-35fe7442cc07" (UID: "b87d7d9b-aff1-45c9-8824-35fe7442cc07"). InnerVolumeSpecName "kube-api-access-w6km9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:20:34 crc kubenswrapper[4866]: I1213 22:20:34.542448 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6km9\" (UniqueName: \"kubernetes.io/projected/b87d7d9b-aff1-45c9-8824-35fe7442cc07-kube-api-access-w6km9\") on node \"crc\" DevicePath \"\"" Dec 13 22:20:34 crc kubenswrapper[4866]: I1213 22:20:34.542487 4866 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b87d7d9b-aff1-45c9-8824-35fe7442cc07-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 22:20:34 crc kubenswrapper[4866]: I1213 22:20:34.551377 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b87d7d9b-aff1-45c9-8824-35fe7442cc07-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b87d7d9b-aff1-45c9-8824-35fe7442cc07" (UID: "b87d7d9b-aff1-45c9-8824-35fe7442cc07"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 22:20:34 crc kubenswrapper[4866]: I1213 22:20:34.643709 4866 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b87d7d9b-aff1-45c9-8824-35fe7442cc07-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 22:20:34 crc kubenswrapper[4866]: I1213 22:20:34.921490 4866 generic.go:334] "Generic (PLEG): container finished" podID="b87d7d9b-aff1-45c9-8824-35fe7442cc07" containerID="be2646684a97bff4eb5771338cc8a629d0ac83adc81e965ad1e9555ab58fc9b3" exitCode=0 Dec 13 22:20:34 crc kubenswrapper[4866]: I1213 22:20:34.921548 4866 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9rw56" Dec 13 22:20:34 crc kubenswrapper[4866]: I1213 22:20:34.921564 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9rw56" event={"ID":"b87d7d9b-aff1-45c9-8824-35fe7442cc07","Type":"ContainerDied","Data":"be2646684a97bff4eb5771338cc8a629d0ac83adc81e965ad1e9555ab58fc9b3"} Dec 13 22:20:34 crc kubenswrapper[4866]: I1213 22:20:34.921919 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9rw56" event={"ID":"b87d7d9b-aff1-45c9-8824-35fe7442cc07","Type":"ContainerDied","Data":"509f72226cd4936a771e43a368ed311532cc4d1b2c63c04cbdd8e9043561f499"} Dec 13 22:20:34 crc kubenswrapper[4866]: I1213 22:20:34.921939 4866 scope.go:117] "RemoveContainer" containerID="be2646684a97bff4eb5771338cc8a629d0ac83adc81e965ad1e9555ab58fc9b3" Dec 13 22:20:34 crc kubenswrapper[4866]: I1213 22:20:34.925551 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-986db4786-sbdcl" event={"ID":"bf19516e-e35b-469c-8eb6-78f6302d60a8","Type":"ContainerStarted","Data":"f28a1a05555ea7fb8f0c8db1bc444300bf636a88c46b21fbd3efef8b61ff3b51"} Dec 13 22:20:34 crc kubenswrapper[4866]: I1213 22:20:34.926349 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-986db4786-sbdcl" Dec 13 22:20:34 crc kubenswrapper[4866]: I1213 22:20:34.934514 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-986db4786-sbdcl" Dec 13 22:20:34 crc kubenswrapper[4866]: I1213 22:20:34.948230 4866 scope.go:117] "RemoveContainer" containerID="00987c67779a0234fdaecc7d38719874ae4db926898ff6334f31173742c954e9" Dec 13 22:20:34 crc kubenswrapper[4866]: I1213 22:20:34.959559 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-986db4786-sbdcl" podStartSLOduration=2.9595424120000002 podStartE2EDuration="2.959542412s" podCreationTimestamp="2025-12-13 22:20:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:20:34.952886711 +0000 UTC m=+232.994225283" watchObservedRunningTime="2025-12-13 22:20:34.959542412 +0000 UTC m=+233.000880964" Dec 13 22:20:34 crc kubenswrapper[4866]: I1213 22:20:34.993000 4866 scope.go:117] "RemoveContainer" containerID="0d1f9b0e4c7f0e39fcb9b19e3f31c32cd8ab36e75c8088064e778d63a039d6ec" Dec 13 22:20:34 crc kubenswrapper[4866]: I1213 22:20:34.997306 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9rw56"] Dec 13 22:20:35 crc 
kubenswrapper[4866]: I1213 22:20:35.006531 4866 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9rw56"] Dec 13 22:20:35 crc kubenswrapper[4866]: I1213 22:20:35.020665 4866 scope.go:117] "RemoveContainer" containerID="be2646684a97bff4eb5771338cc8a629d0ac83adc81e965ad1e9555ab58fc9b3" Dec 13 22:20:35 crc kubenswrapper[4866]: E1213 22:20:35.022628 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be2646684a97bff4eb5771338cc8a629d0ac83adc81e965ad1e9555ab58fc9b3\": container with ID starting with be2646684a97bff4eb5771338cc8a629d0ac83adc81e965ad1e9555ab58fc9b3 not found: ID does not exist" containerID="be2646684a97bff4eb5771338cc8a629d0ac83adc81e965ad1e9555ab58fc9b3" Dec 13 22:20:35 crc kubenswrapper[4866]: I1213 22:20:35.022686 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be2646684a97bff4eb5771338cc8a629d0ac83adc81e965ad1e9555ab58fc9b3"} err="failed to get container status \"be2646684a97bff4eb5771338cc8a629d0ac83adc81e965ad1e9555ab58fc9b3\": rpc error: code = NotFound desc = could not find container \"be2646684a97bff4eb5771338cc8a629d0ac83adc81e965ad1e9555ab58fc9b3\": container with ID starting with be2646684a97bff4eb5771338cc8a629d0ac83adc81e965ad1e9555ab58fc9b3 not found: ID does not exist" Dec 13 22:20:35 crc kubenswrapper[4866]: I1213 22:20:35.022719 4866 scope.go:117] "RemoveContainer" containerID="00987c67779a0234fdaecc7d38719874ae4db926898ff6334f31173742c954e9" Dec 13 22:20:35 crc kubenswrapper[4866]: E1213 22:20:35.025318 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00987c67779a0234fdaecc7d38719874ae4db926898ff6334f31173742c954e9\": container with ID starting with 00987c67779a0234fdaecc7d38719874ae4db926898ff6334f31173742c954e9 not found: ID does not exist" containerID="00987c67779a0234fdaecc7d38719874ae4db926898ff6334f31173742c954e9" Dec 13 22:20:35 crc kubenswrapper[4866]: I1213 22:20:35.025377 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00987c67779a0234fdaecc7d38719874ae4db926898ff6334f31173742c954e9"} err="failed to get container status \"00987c67779a0234fdaecc7d38719874ae4db926898ff6334f31173742c954e9\": rpc error: code = NotFound desc = could not find container \"00987c67779a0234fdaecc7d38719874ae4db926898ff6334f31173742c954e9\": container with ID starting with 00987c67779a0234fdaecc7d38719874ae4db926898ff6334f31173742c954e9 not found: ID does not exist" Dec 13 22:20:35 crc kubenswrapper[4866]: I1213 22:20:35.025424 4866 scope.go:117] "RemoveContainer" containerID="0d1f9b0e4c7f0e39fcb9b19e3f31c32cd8ab36e75c8088064e778d63a039d6ec" Dec 13 22:20:35 crc kubenswrapper[4866]: E1213 22:20:35.027876 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d1f9b0e4c7f0e39fcb9b19e3f31c32cd8ab36e75c8088064e778d63a039d6ec\": container with ID starting with 0d1f9b0e4c7f0e39fcb9b19e3f31c32cd8ab36e75c8088064e778d63a039d6ec not found: ID does not exist" containerID="0d1f9b0e4c7f0e39fcb9b19e3f31c32cd8ab36e75c8088064e778d63a039d6ec" Dec 13 22:20:35 crc kubenswrapper[4866]: I1213 22:20:35.027913 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d1f9b0e4c7f0e39fcb9b19e3f31c32cd8ab36e75c8088064e778d63a039d6ec"} err="failed to get container status 
\"0d1f9b0e4c7f0e39fcb9b19e3f31c32cd8ab36e75c8088064e778d63a039d6ec\": rpc error: code = NotFound desc = could not find container \"0d1f9b0e4c7f0e39fcb9b19e3f31c32cd8ab36e75c8088064e778d63a039d6ec\": container with ID starting with 0d1f9b0e4c7f0e39fcb9b19e3f31c32cd8ab36e75c8088064e778d63a039d6ec not found: ID does not exist" Dec 13 22:20:35 crc kubenswrapper[4866]: I1213 22:20:35.817380 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-558d65cfdd-5wj99"] Dec 13 22:20:35 crc kubenswrapper[4866]: E1213 22:20:35.817564 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b87d7d9b-aff1-45c9-8824-35fe7442cc07" containerName="registry-server" Dec 13 22:20:35 crc kubenswrapper[4866]: I1213 22:20:35.817574 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="b87d7d9b-aff1-45c9-8824-35fe7442cc07" containerName="registry-server" Dec 13 22:20:35 crc kubenswrapper[4866]: E1213 22:20:35.817583 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b87d7d9b-aff1-45c9-8824-35fe7442cc07" containerName="extract-content" Dec 13 22:20:35 crc kubenswrapper[4866]: I1213 22:20:35.817590 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="b87d7d9b-aff1-45c9-8824-35fe7442cc07" containerName="extract-content" Dec 13 22:20:35 crc kubenswrapper[4866]: E1213 22:20:35.817602 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b87d7d9b-aff1-45c9-8824-35fe7442cc07" containerName="extract-utilities" Dec 13 22:20:35 crc kubenswrapper[4866]: I1213 22:20:35.817608 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="b87d7d9b-aff1-45c9-8824-35fe7442cc07" containerName="extract-utilities" Dec 13 22:20:35 crc kubenswrapper[4866]: I1213 22:20:35.817713 4866 memory_manager.go:354] "RemoveStaleState removing state" podUID="b87d7d9b-aff1-45c9-8824-35fe7442cc07" containerName="registry-server" Dec 13 22:20:35 crc kubenswrapper[4866]: I1213 22:20:35.818069 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-558d65cfdd-5wj99" Dec 13 22:20:35 crc kubenswrapper[4866]: I1213 22:20:35.819523 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 13 22:20:35 crc kubenswrapper[4866]: I1213 22:20:35.820941 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 13 22:20:35 crc kubenswrapper[4866]: I1213 22:20:35.821269 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 13 22:20:35 crc kubenswrapper[4866]: I1213 22:20:35.823114 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 13 22:20:35 crc kubenswrapper[4866]: I1213 22:20:35.828395 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 13 22:20:35 crc kubenswrapper[4866]: I1213 22:20:35.829725 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 13 22:20:35 crc kubenswrapper[4866]: I1213 22:20:35.831034 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-558d65cfdd-5wj99"] Dec 13 22:20:35 crc kubenswrapper[4866]: I1213 22:20:35.833433 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 13 22:20:35 crc kubenswrapper[4866]: I1213 22:20:35.962204 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8092b32f-85f8-47f5-8298-6319f103cf7a-client-ca\") pod \"controller-manager-558d65cfdd-5wj99\" (UID: \"8092b32f-85f8-47f5-8298-6319f103cf7a\") " pod="openshift-controller-manager/controller-manager-558d65cfdd-5wj99" Dec 13 22:20:35 crc kubenswrapper[4866]: I1213 22:20:35.962266 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8092b32f-85f8-47f5-8298-6319f103cf7a-proxy-ca-bundles\") pod \"controller-manager-558d65cfdd-5wj99\" (UID: \"8092b32f-85f8-47f5-8298-6319f103cf7a\") " pod="openshift-controller-manager/controller-manager-558d65cfdd-5wj99" Dec 13 22:20:35 crc kubenswrapper[4866]: I1213 22:20:35.962417 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8092b32f-85f8-47f5-8298-6319f103cf7a-serving-cert\") pod \"controller-manager-558d65cfdd-5wj99\" (UID: \"8092b32f-85f8-47f5-8298-6319f103cf7a\") " pod="openshift-controller-manager/controller-manager-558d65cfdd-5wj99" Dec 13 22:20:35 crc kubenswrapper[4866]: I1213 22:20:35.962455 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8092b32f-85f8-47f5-8298-6319f103cf7a-config\") pod \"controller-manager-558d65cfdd-5wj99\" (UID: \"8092b32f-85f8-47f5-8298-6319f103cf7a\") " pod="openshift-controller-manager/controller-manager-558d65cfdd-5wj99" Dec 13 22:20:35 crc kubenswrapper[4866]: I1213 22:20:35.962521 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd4h4\" (UniqueName: 
\"kubernetes.io/projected/8092b32f-85f8-47f5-8298-6319f103cf7a-kube-api-access-vd4h4\") pod \"controller-manager-558d65cfdd-5wj99\" (UID: \"8092b32f-85f8-47f5-8298-6319f103cf7a\") " pod="openshift-controller-manager/controller-manager-558d65cfdd-5wj99" Dec 13 22:20:36 crc kubenswrapper[4866]: I1213 22:20:36.063509 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd4h4\" (UniqueName: \"kubernetes.io/projected/8092b32f-85f8-47f5-8298-6319f103cf7a-kube-api-access-vd4h4\") pod \"controller-manager-558d65cfdd-5wj99\" (UID: \"8092b32f-85f8-47f5-8298-6319f103cf7a\") " pod="openshift-controller-manager/controller-manager-558d65cfdd-5wj99" Dec 13 22:20:36 crc kubenswrapper[4866]: I1213 22:20:36.063593 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8092b32f-85f8-47f5-8298-6319f103cf7a-client-ca\") pod \"controller-manager-558d65cfdd-5wj99\" (UID: \"8092b32f-85f8-47f5-8298-6319f103cf7a\") " pod="openshift-controller-manager/controller-manager-558d65cfdd-5wj99" Dec 13 22:20:36 crc kubenswrapper[4866]: I1213 22:20:36.063660 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8092b32f-85f8-47f5-8298-6319f103cf7a-proxy-ca-bundles\") pod \"controller-manager-558d65cfdd-5wj99\" (UID: \"8092b32f-85f8-47f5-8298-6319f103cf7a\") " pod="openshift-controller-manager/controller-manager-558d65cfdd-5wj99" Dec 13 22:20:36 crc kubenswrapper[4866]: I1213 22:20:36.063714 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8092b32f-85f8-47f5-8298-6319f103cf7a-serving-cert\") pod \"controller-manager-558d65cfdd-5wj99\" (UID: \"8092b32f-85f8-47f5-8298-6319f103cf7a\") " pod="openshift-controller-manager/controller-manager-558d65cfdd-5wj99" Dec 13 22:20:36 crc kubenswrapper[4866]: I1213 22:20:36.063736 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8092b32f-85f8-47f5-8298-6319f103cf7a-config\") pod \"controller-manager-558d65cfdd-5wj99\" (UID: \"8092b32f-85f8-47f5-8298-6319f103cf7a\") " pod="openshift-controller-manager/controller-manager-558d65cfdd-5wj99" Dec 13 22:20:36 crc kubenswrapper[4866]: I1213 22:20:36.064856 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8092b32f-85f8-47f5-8298-6319f103cf7a-proxy-ca-bundles\") pod \"controller-manager-558d65cfdd-5wj99\" (UID: \"8092b32f-85f8-47f5-8298-6319f103cf7a\") " pod="openshift-controller-manager/controller-manager-558d65cfdd-5wj99" Dec 13 22:20:36 crc kubenswrapper[4866]: I1213 22:20:36.065162 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8092b32f-85f8-47f5-8298-6319f103cf7a-config\") pod \"controller-manager-558d65cfdd-5wj99\" (UID: \"8092b32f-85f8-47f5-8298-6319f103cf7a\") " pod="openshift-controller-manager/controller-manager-558d65cfdd-5wj99" Dec 13 22:20:36 crc kubenswrapper[4866]: I1213 22:20:36.065439 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8092b32f-85f8-47f5-8298-6319f103cf7a-client-ca\") pod \"controller-manager-558d65cfdd-5wj99\" (UID: \"8092b32f-85f8-47f5-8298-6319f103cf7a\") " 
pod="openshift-controller-manager/controller-manager-558d65cfdd-5wj99" Dec 13 22:20:36 crc kubenswrapper[4866]: I1213 22:20:36.079126 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8092b32f-85f8-47f5-8298-6319f103cf7a-serving-cert\") pod \"controller-manager-558d65cfdd-5wj99\" (UID: \"8092b32f-85f8-47f5-8298-6319f103cf7a\") " pod="openshift-controller-manager/controller-manager-558d65cfdd-5wj99" Dec 13 22:20:36 crc kubenswrapper[4866]: I1213 22:20:36.081348 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd4h4\" (UniqueName: \"kubernetes.io/projected/8092b32f-85f8-47f5-8298-6319f103cf7a-kube-api-access-vd4h4\") pod \"controller-manager-558d65cfdd-5wj99\" (UID: \"8092b32f-85f8-47f5-8298-6319f103cf7a\") " pod="openshift-controller-manager/controller-manager-558d65cfdd-5wj99" Dec 13 22:20:36 crc kubenswrapper[4866]: I1213 22:20:36.139341 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-558d65cfdd-5wj99" Dec 13 22:20:36 crc kubenswrapper[4866]: I1213 22:20:36.223378 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b87d7d9b-aff1-45c9-8824-35fe7442cc07" path="/var/lib/kubelet/pods/b87d7d9b-aff1-45c9-8824-35fe7442cc07/volumes" Dec 13 22:20:36 crc kubenswrapper[4866]: I1213 22:20:36.306731 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-558d65cfdd-5wj99"] Dec 13 22:20:36 crc kubenswrapper[4866]: I1213 22:20:36.937865 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Dec 13 22:20:36 crc kubenswrapper[4866]: I1213 22:20:36.939425 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 13 22:20:36 crc kubenswrapper[4866]: I1213 22:20:36.939467 4866 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="4814e8ff8c50cebef0a8b09f4ac70005aca80213619aac0f4220b8d5d23d714d" exitCode=137 Dec 13 22:20:36 crc kubenswrapper[4866]: I1213 22:20:36.939528 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"4814e8ff8c50cebef0a8b09f4ac70005aca80213619aac0f4220b8d5d23d714d"} Dec 13 22:20:36 crc kubenswrapper[4866]: I1213 22:20:36.939560 4866 scope.go:117] "RemoveContainer" containerID="2c4d57b379add2c2a64fb7e84b381674ac05488e23d7687ed9e1456d8d35c3f5" Dec 13 22:20:36 crc kubenswrapper[4866]: I1213 22:20:36.941031 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-558d65cfdd-5wj99" event={"ID":"8092b32f-85f8-47f5-8298-6319f103cf7a","Type":"ContainerStarted","Data":"61dc444238b690e5a12eb163763f2796b152342aa631aae34fb419243d830b65"} Dec 13 22:20:36 crc kubenswrapper[4866]: I1213 22:20:36.941089 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-558d65cfdd-5wj99" event={"ID":"8092b32f-85f8-47f5-8298-6319f103cf7a","Type":"ContainerStarted","Data":"5776164ccd19813fcc79cbaaa03e87312727b91be02b080c83edf715e5d4f218"} Dec 13 22:20:36 crc kubenswrapper[4866]: I1213 
22:20:36.956379 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-558d65cfdd-5wj99" podStartSLOduration=4.956360001 podStartE2EDuration="4.956360001s" podCreationTimestamp="2025-12-13 22:20:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:20:36.954260254 +0000 UTC m=+234.995598806" watchObservedRunningTime="2025-12-13 22:20:36.956360001 +0000 UTC m=+234.997698553" Dec 13 22:20:37 crc kubenswrapper[4866]: I1213 22:20:37.947130 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Dec 13 22:20:37 crc kubenswrapper[4866]: I1213 22:20:37.948548 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c069babbea6cddfbf974132a958aaccf089e88096064c37c52d5f4fb0273b99e"} Dec 13 22:20:37 crc kubenswrapper[4866]: I1213 22:20:37.949405 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-558d65cfdd-5wj99" Dec 13 22:20:37 crc kubenswrapper[4866]: I1213 22:20:37.953776 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-558d65cfdd-5wj99" Dec 13 22:20:41 crc kubenswrapper[4866]: I1213 22:20:41.460236 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 22:20:46 crc kubenswrapper[4866]: I1213 22:20:46.526445 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 22:20:46 crc kubenswrapper[4866]: I1213 22:20:46.531350 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 22:20:47 crc kubenswrapper[4866]: I1213 22:20:47.008419 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 22:20:55 crc kubenswrapper[4866]: I1213 22:20:55.039164 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-22h7b"] Dec 13 22:20:55 crc kubenswrapper[4866]: I1213 22:20:55.040222 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-22h7b" Dec 13 22:20:55 crc kubenswrapper[4866]: I1213 22:20:55.061371 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-22h7b"] Dec 13 22:20:55 crc kubenswrapper[4866]: I1213 22:20:55.210684 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d76ef71a-21c2-4304-bbbe-db343775dfd5-bound-sa-token\") pod \"image-registry-66df7c8f76-22h7b\" (UID: \"d76ef71a-21c2-4304-bbbe-db343775dfd5\") " pod="openshift-image-registry/image-registry-66df7c8f76-22h7b" Dec 13 22:20:55 crc kubenswrapper[4866]: I1213 22:20:55.210733 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf7tn\" (UniqueName: \"kubernetes.io/projected/d76ef71a-21c2-4304-bbbe-db343775dfd5-kube-api-access-cf7tn\") pod \"image-registry-66df7c8f76-22h7b\" (UID: \"d76ef71a-21c2-4304-bbbe-db343775dfd5\") " pod="openshift-image-registry/image-registry-66df7c8f76-22h7b" Dec 13 22:20:55 crc kubenswrapper[4866]: I1213 22:20:55.210764 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d76ef71a-21c2-4304-bbbe-db343775dfd5-ca-trust-extracted\") pod \"image-registry-66df7c8f76-22h7b\" (UID: \"d76ef71a-21c2-4304-bbbe-db343775dfd5\") " pod="openshift-image-registry/image-registry-66df7c8f76-22h7b" Dec 13 22:20:55 crc kubenswrapper[4866]: I1213 22:20:55.210787 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d76ef71a-21c2-4304-bbbe-db343775dfd5-registry-tls\") pod \"image-registry-66df7c8f76-22h7b\" (UID: \"d76ef71a-21c2-4304-bbbe-db343775dfd5\") " pod="openshift-image-registry/image-registry-66df7c8f76-22h7b" Dec 13 22:20:55 crc kubenswrapper[4866]: I1213 22:20:55.210830 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-22h7b\" (UID: \"d76ef71a-21c2-4304-bbbe-db343775dfd5\") " pod="openshift-image-registry/image-registry-66df7c8f76-22h7b" Dec 13 22:20:55 crc kubenswrapper[4866]: I1213 22:20:55.210867 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d76ef71a-21c2-4304-bbbe-db343775dfd5-installation-pull-secrets\") pod \"image-registry-66df7c8f76-22h7b\" (UID: \"d76ef71a-21c2-4304-bbbe-db343775dfd5\") " pod="openshift-image-registry/image-registry-66df7c8f76-22h7b" Dec 13 22:20:55 crc kubenswrapper[4866]: I1213 22:20:55.210908 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d76ef71a-21c2-4304-bbbe-db343775dfd5-trusted-ca\") pod \"image-registry-66df7c8f76-22h7b\" (UID: \"d76ef71a-21c2-4304-bbbe-db343775dfd5\") " pod="openshift-image-registry/image-registry-66df7c8f76-22h7b" Dec 13 22:20:55 crc kubenswrapper[4866]: I1213 22:20:55.210943 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/d76ef71a-21c2-4304-bbbe-db343775dfd5-registry-certificates\") pod \"image-registry-66df7c8f76-22h7b\" (UID: \"d76ef71a-21c2-4304-bbbe-db343775dfd5\") " pod="openshift-image-registry/image-registry-66df7c8f76-22h7b" Dec 13 22:20:55 crc kubenswrapper[4866]: I1213 22:20:55.282782 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-22h7b\" (UID: \"d76ef71a-21c2-4304-bbbe-db343775dfd5\") " pod="openshift-image-registry/image-registry-66df7c8f76-22h7b" Dec 13 22:20:55 crc kubenswrapper[4866]: I1213 22:20:55.311846 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d76ef71a-21c2-4304-bbbe-db343775dfd5-registry-certificates\") pod \"image-registry-66df7c8f76-22h7b\" (UID: \"d76ef71a-21c2-4304-bbbe-db343775dfd5\") " pod="openshift-image-registry/image-registry-66df7c8f76-22h7b" Dec 13 22:20:55 crc kubenswrapper[4866]: I1213 22:20:55.311907 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d76ef71a-21c2-4304-bbbe-db343775dfd5-bound-sa-token\") pod \"image-registry-66df7c8f76-22h7b\" (UID: \"d76ef71a-21c2-4304-bbbe-db343775dfd5\") " pod="openshift-image-registry/image-registry-66df7c8f76-22h7b" Dec 13 22:20:55 crc kubenswrapper[4866]: I1213 22:20:55.311927 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf7tn\" (UniqueName: \"kubernetes.io/projected/d76ef71a-21c2-4304-bbbe-db343775dfd5-kube-api-access-cf7tn\") pod \"image-registry-66df7c8f76-22h7b\" (UID: \"d76ef71a-21c2-4304-bbbe-db343775dfd5\") " pod="openshift-image-registry/image-registry-66df7c8f76-22h7b" Dec 13 22:20:55 crc kubenswrapper[4866]: I1213 22:20:55.311949 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d76ef71a-21c2-4304-bbbe-db343775dfd5-ca-trust-extracted\") pod \"image-registry-66df7c8f76-22h7b\" (UID: \"d76ef71a-21c2-4304-bbbe-db343775dfd5\") " pod="openshift-image-registry/image-registry-66df7c8f76-22h7b" Dec 13 22:20:55 crc kubenswrapper[4866]: I1213 22:20:55.311968 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d76ef71a-21c2-4304-bbbe-db343775dfd5-registry-tls\") pod \"image-registry-66df7c8f76-22h7b\" (UID: \"d76ef71a-21c2-4304-bbbe-db343775dfd5\") " pod="openshift-image-registry/image-registry-66df7c8f76-22h7b" Dec 13 22:20:55 crc kubenswrapper[4866]: I1213 22:20:55.312010 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d76ef71a-21c2-4304-bbbe-db343775dfd5-installation-pull-secrets\") pod \"image-registry-66df7c8f76-22h7b\" (UID: \"d76ef71a-21c2-4304-bbbe-db343775dfd5\") " pod="openshift-image-registry/image-registry-66df7c8f76-22h7b" Dec 13 22:20:55 crc kubenswrapper[4866]: I1213 22:20:55.312039 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d76ef71a-21c2-4304-bbbe-db343775dfd5-trusted-ca\") pod \"image-registry-66df7c8f76-22h7b\" (UID: \"d76ef71a-21c2-4304-bbbe-db343775dfd5\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-22h7b" Dec 13 22:20:55 crc kubenswrapper[4866]: I1213 22:20:55.312790 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d76ef71a-21c2-4304-bbbe-db343775dfd5-ca-trust-extracted\") pod \"image-registry-66df7c8f76-22h7b\" (UID: \"d76ef71a-21c2-4304-bbbe-db343775dfd5\") " pod="openshift-image-registry/image-registry-66df7c8f76-22h7b" Dec 13 22:20:55 crc kubenswrapper[4866]: I1213 22:20:55.313171 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d76ef71a-21c2-4304-bbbe-db343775dfd5-registry-certificates\") pod \"image-registry-66df7c8f76-22h7b\" (UID: \"d76ef71a-21c2-4304-bbbe-db343775dfd5\") " pod="openshift-image-registry/image-registry-66df7c8f76-22h7b" Dec 13 22:20:55 crc kubenswrapper[4866]: I1213 22:20:55.313562 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d76ef71a-21c2-4304-bbbe-db343775dfd5-trusted-ca\") pod \"image-registry-66df7c8f76-22h7b\" (UID: \"d76ef71a-21c2-4304-bbbe-db343775dfd5\") " pod="openshift-image-registry/image-registry-66df7c8f76-22h7b" Dec 13 22:20:55 crc kubenswrapper[4866]: I1213 22:20:55.318685 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d76ef71a-21c2-4304-bbbe-db343775dfd5-registry-tls\") pod \"image-registry-66df7c8f76-22h7b\" (UID: \"d76ef71a-21c2-4304-bbbe-db343775dfd5\") " pod="openshift-image-registry/image-registry-66df7c8f76-22h7b" Dec 13 22:20:55 crc kubenswrapper[4866]: I1213 22:20:55.320537 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d76ef71a-21c2-4304-bbbe-db343775dfd5-installation-pull-secrets\") pod \"image-registry-66df7c8f76-22h7b\" (UID: \"d76ef71a-21c2-4304-bbbe-db343775dfd5\") " pod="openshift-image-registry/image-registry-66df7c8f76-22h7b" Dec 13 22:20:55 crc kubenswrapper[4866]: I1213 22:20:55.352945 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf7tn\" (UniqueName: \"kubernetes.io/projected/d76ef71a-21c2-4304-bbbe-db343775dfd5-kube-api-access-cf7tn\") pod \"image-registry-66df7c8f76-22h7b\" (UID: \"d76ef71a-21c2-4304-bbbe-db343775dfd5\") " pod="openshift-image-registry/image-registry-66df7c8f76-22h7b" Dec 13 22:20:55 crc kubenswrapper[4866]: I1213 22:20:55.357589 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d76ef71a-21c2-4304-bbbe-db343775dfd5-bound-sa-token\") pod \"image-registry-66df7c8f76-22h7b\" (UID: \"d76ef71a-21c2-4304-bbbe-db343775dfd5\") " pod="openshift-image-registry/image-registry-66df7c8f76-22h7b" Dec 13 22:20:55 crc kubenswrapper[4866]: I1213 22:20:55.358507 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-22h7b" Dec 13 22:20:55 crc kubenswrapper[4866]: I1213 22:20:55.751682 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-22h7b"] Dec 13 22:20:56 crc kubenswrapper[4866]: I1213 22:20:56.053878 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-22h7b" event={"ID":"d76ef71a-21c2-4304-bbbe-db343775dfd5","Type":"ContainerStarted","Data":"26fe3a5b1d856d99c0b64c50a8fb2b4f1979d1450f1a9e369fa0071d2b9e1342"} Dec 13 22:20:57 crc kubenswrapper[4866]: I1213 22:20:57.059159 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-22h7b" event={"ID":"d76ef71a-21c2-4304-bbbe-db343775dfd5","Type":"ContainerStarted","Data":"86981a5199a9e5c0970c5ee43dd91ac83e94f81eb4e19b79ad7786b236b2d46e"} Dec 13 22:20:57 crc kubenswrapper[4866]: I1213 22:20:57.059579 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-22h7b" Dec 13 22:20:57 crc kubenswrapper[4866]: I1213 22:20:57.083446 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-22h7b" podStartSLOduration=2.083426393 podStartE2EDuration="2.083426393s" podCreationTimestamp="2025-12-13 22:20:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:20:57.078610372 +0000 UTC m=+255.119948914" watchObservedRunningTime="2025-12-13 22:20:57.083426393 +0000 UTC m=+255.124764945" Dec 13 22:21:10 crc kubenswrapper[4866]: I1213 22:21:10.284033 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-986db4786-sbdcl"] Dec 13 22:21:10 crc kubenswrapper[4866]: I1213 22:21:10.284817 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-986db4786-sbdcl" podUID="bf19516e-e35b-469c-8eb6-78f6302d60a8" containerName="route-controller-manager" containerID="cri-o://f28a1a05555ea7fb8f0c8db1bc444300bf636a88c46b21fbd3efef8b61ff3b51" gracePeriod=30 Dec 13 22:21:10 crc kubenswrapper[4866]: I1213 22:21:10.684465 4866 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-986db4786-sbdcl" Dec 13 22:21:10 crc kubenswrapper[4866]: I1213 22:21:10.746564 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcq4f\" (UniqueName: \"kubernetes.io/projected/bf19516e-e35b-469c-8eb6-78f6302d60a8-kube-api-access-xcq4f\") pod \"bf19516e-e35b-469c-8eb6-78f6302d60a8\" (UID: \"bf19516e-e35b-469c-8eb6-78f6302d60a8\") " Dec 13 22:21:10 crc kubenswrapper[4866]: I1213 22:21:10.753266 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf19516e-e35b-469c-8eb6-78f6302d60a8-kube-api-access-xcq4f" (OuterVolumeSpecName: "kube-api-access-xcq4f") pod "bf19516e-e35b-469c-8eb6-78f6302d60a8" (UID: "bf19516e-e35b-469c-8eb6-78f6302d60a8"). InnerVolumeSpecName "kube-api-access-xcq4f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:21:10 crc kubenswrapper[4866]: I1213 22:21:10.847715 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf19516e-e35b-469c-8eb6-78f6302d60a8-client-ca\") pod \"bf19516e-e35b-469c-8eb6-78f6302d60a8\" (UID: \"bf19516e-e35b-469c-8eb6-78f6302d60a8\") " Dec 13 22:21:10 crc kubenswrapper[4866]: I1213 22:21:10.847846 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf19516e-e35b-469c-8eb6-78f6302d60a8-serving-cert\") pod \"bf19516e-e35b-469c-8eb6-78f6302d60a8\" (UID: \"bf19516e-e35b-469c-8eb6-78f6302d60a8\") " Dec 13 22:21:10 crc kubenswrapper[4866]: I1213 22:21:10.847873 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf19516e-e35b-469c-8eb6-78f6302d60a8-config\") pod \"bf19516e-e35b-469c-8eb6-78f6302d60a8\" (UID: \"bf19516e-e35b-469c-8eb6-78f6302d60a8\") " Dec 13 22:21:10 crc kubenswrapper[4866]: I1213 22:21:10.848420 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf19516e-e35b-469c-8eb6-78f6302d60a8-client-ca" (OuterVolumeSpecName: "client-ca") pod "bf19516e-e35b-469c-8eb6-78f6302d60a8" (UID: "bf19516e-e35b-469c-8eb6-78f6302d60a8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:21:10 crc kubenswrapper[4866]: I1213 22:21:10.848603 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf19516e-e35b-469c-8eb6-78f6302d60a8-config" (OuterVolumeSpecName: "config") pod "bf19516e-e35b-469c-8eb6-78f6302d60a8" (UID: "bf19516e-e35b-469c-8eb6-78f6302d60a8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:21:10 crc kubenswrapper[4866]: I1213 22:21:10.848750 4866 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf19516e-e35b-469c-8eb6-78f6302d60a8-client-ca\") on node \"crc\" DevicePath \"\"" Dec 13 22:21:10 crc kubenswrapper[4866]: I1213 22:21:10.848767 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcq4f\" (UniqueName: \"kubernetes.io/projected/bf19516e-e35b-469c-8eb6-78f6302d60a8-kube-api-access-xcq4f\") on node \"crc\" DevicePath \"\"" Dec 13 22:21:10 crc kubenswrapper[4866]: I1213 22:21:10.848779 4866 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf19516e-e35b-469c-8eb6-78f6302d60a8-config\") on node \"crc\" DevicePath \"\"" Dec 13 22:21:10 crc kubenswrapper[4866]: I1213 22:21:10.855350 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf19516e-e35b-469c-8eb6-78f6302d60a8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bf19516e-e35b-469c-8eb6-78f6302d60a8" (UID: "bf19516e-e35b-469c-8eb6-78f6302d60a8"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:21:10 crc kubenswrapper[4866]: I1213 22:21:10.949475 4866 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf19516e-e35b-469c-8eb6-78f6302d60a8-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 22:21:11 crc kubenswrapper[4866]: I1213 22:21:11.125241 4866 generic.go:334] "Generic (PLEG): container finished" podID="bf19516e-e35b-469c-8eb6-78f6302d60a8" containerID="f28a1a05555ea7fb8f0c8db1bc444300bf636a88c46b21fbd3efef8b61ff3b51" exitCode=0 Dec 13 22:21:11 crc kubenswrapper[4866]: I1213 22:21:11.125280 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-986db4786-sbdcl" event={"ID":"bf19516e-e35b-469c-8eb6-78f6302d60a8","Type":"ContainerDied","Data":"f28a1a05555ea7fb8f0c8db1bc444300bf636a88c46b21fbd3efef8b61ff3b51"} Dec 13 22:21:11 crc kubenswrapper[4866]: I1213 22:21:11.125303 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-986db4786-sbdcl" event={"ID":"bf19516e-e35b-469c-8eb6-78f6302d60a8","Type":"ContainerDied","Data":"0d30d4ea12de2f11466624130655cdf33dcb15af48d2a8ba4207b4826c736e2c"} Dec 13 22:21:11 crc kubenswrapper[4866]: I1213 22:21:11.125319 4866 scope.go:117] "RemoveContainer" containerID="f28a1a05555ea7fb8f0c8db1bc444300bf636a88c46b21fbd3efef8b61ff3b51" Dec 13 22:21:11 crc kubenswrapper[4866]: I1213 22:21:11.125412 4866 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-986db4786-sbdcl" Dec 13 22:21:11 crc kubenswrapper[4866]: I1213 22:21:11.150067 4866 scope.go:117] "RemoveContainer" containerID="f28a1a05555ea7fb8f0c8db1bc444300bf636a88c46b21fbd3efef8b61ff3b51" Dec 13 22:21:11 crc kubenswrapper[4866]: E1213 22:21:11.150707 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f28a1a05555ea7fb8f0c8db1bc444300bf636a88c46b21fbd3efef8b61ff3b51\": container with ID starting with f28a1a05555ea7fb8f0c8db1bc444300bf636a88c46b21fbd3efef8b61ff3b51 not found: ID does not exist" containerID="f28a1a05555ea7fb8f0c8db1bc444300bf636a88c46b21fbd3efef8b61ff3b51" Dec 13 22:21:11 crc kubenswrapper[4866]: I1213 22:21:11.150748 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f28a1a05555ea7fb8f0c8db1bc444300bf636a88c46b21fbd3efef8b61ff3b51"} err="failed to get container status \"f28a1a05555ea7fb8f0c8db1bc444300bf636a88c46b21fbd3efef8b61ff3b51\": rpc error: code = NotFound desc = could not find container \"f28a1a05555ea7fb8f0c8db1bc444300bf636a88c46b21fbd3efef8b61ff3b51\": container with ID starting with f28a1a05555ea7fb8f0c8db1bc444300bf636a88c46b21fbd3efef8b61ff3b51 not found: ID does not exist" Dec 13 22:21:11 crc kubenswrapper[4866]: I1213 22:21:11.162120 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-986db4786-sbdcl"] Dec 13 22:21:11 crc kubenswrapper[4866]: I1213 22:21:11.165145 4866 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-986db4786-sbdcl"] Dec 13 22:21:11 crc kubenswrapper[4866]: I1213 22:21:11.842225 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6db8d698b7-xfzjl"] Dec 13 22:21:11 crc kubenswrapper[4866]: 
E1213 22:21:11.842679 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf19516e-e35b-469c-8eb6-78f6302d60a8" containerName="route-controller-manager" Dec 13 22:21:11 crc kubenswrapper[4866]: I1213 22:21:11.842690 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf19516e-e35b-469c-8eb6-78f6302d60a8" containerName="route-controller-manager" Dec 13 22:21:11 crc kubenswrapper[4866]: I1213 22:21:11.842774 4866 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf19516e-e35b-469c-8eb6-78f6302d60a8" containerName="route-controller-manager" Dec 13 22:21:11 crc kubenswrapper[4866]: I1213 22:21:11.843136 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-xfzjl" Dec 13 22:21:11 crc kubenswrapper[4866]: I1213 22:21:11.845911 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 13 22:21:11 crc kubenswrapper[4866]: I1213 22:21:11.845975 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 13 22:21:11 crc kubenswrapper[4866]: I1213 22:21:11.846254 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 13 22:21:11 crc kubenswrapper[4866]: I1213 22:21:11.846441 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 13 22:21:11 crc kubenswrapper[4866]: I1213 22:21:11.845925 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 13 22:21:11 crc kubenswrapper[4866]: I1213 22:21:11.846776 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 13 22:21:11 crc kubenswrapper[4866]: I1213 22:21:11.854552 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6db8d698b7-xfzjl"] Dec 13 22:21:11 crc kubenswrapper[4866]: I1213 22:21:11.978318 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcb7ac92-e606-45f0-aa3c-9b6ef14537cb-serving-cert\") pod \"route-controller-manager-6db8d698b7-xfzjl\" (UID: \"bcb7ac92-e606-45f0-aa3c-9b6ef14537cb\") " pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-xfzjl" Dec 13 22:21:11 crc kubenswrapper[4866]: I1213 22:21:11.978374 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt9xk\" (UniqueName: \"kubernetes.io/projected/bcb7ac92-e606-45f0-aa3c-9b6ef14537cb-kube-api-access-pt9xk\") pod \"route-controller-manager-6db8d698b7-xfzjl\" (UID: \"bcb7ac92-e606-45f0-aa3c-9b6ef14537cb\") " pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-xfzjl" Dec 13 22:21:11 crc kubenswrapper[4866]: I1213 22:21:11.978404 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bcb7ac92-e606-45f0-aa3c-9b6ef14537cb-client-ca\") pod \"route-controller-manager-6db8d698b7-xfzjl\" (UID: \"bcb7ac92-e606-45f0-aa3c-9b6ef14537cb\") " pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-xfzjl" Dec 13 22:21:11 crc 
kubenswrapper[4866]: I1213 22:21:11.978438 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcb7ac92-e606-45f0-aa3c-9b6ef14537cb-config\") pod \"route-controller-manager-6db8d698b7-xfzjl\" (UID: \"bcb7ac92-e606-45f0-aa3c-9b6ef14537cb\") " pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-xfzjl" Dec 13 22:21:12 crc kubenswrapper[4866]: I1213 22:21:12.079587 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcb7ac92-e606-45f0-aa3c-9b6ef14537cb-serving-cert\") pod \"route-controller-manager-6db8d698b7-xfzjl\" (UID: \"bcb7ac92-e606-45f0-aa3c-9b6ef14537cb\") " pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-xfzjl" Dec 13 22:21:12 crc kubenswrapper[4866]: I1213 22:21:12.080179 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt9xk\" (UniqueName: \"kubernetes.io/projected/bcb7ac92-e606-45f0-aa3c-9b6ef14537cb-kube-api-access-pt9xk\") pod \"route-controller-manager-6db8d698b7-xfzjl\" (UID: \"bcb7ac92-e606-45f0-aa3c-9b6ef14537cb\") " pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-xfzjl" Dec 13 22:21:12 crc kubenswrapper[4866]: I1213 22:21:12.080400 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bcb7ac92-e606-45f0-aa3c-9b6ef14537cb-client-ca\") pod \"route-controller-manager-6db8d698b7-xfzjl\" (UID: \"bcb7ac92-e606-45f0-aa3c-9b6ef14537cb\") " pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-xfzjl" Dec 13 22:21:12 crc kubenswrapper[4866]: I1213 22:21:12.082941 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcb7ac92-e606-45f0-aa3c-9b6ef14537cb-config\") pod \"route-controller-manager-6db8d698b7-xfzjl\" (UID: \"bcb7ac92-e606-45f0-aa3c-9b6ef14537cb\") " pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-xfzjl" Dec 13 22:21:12 crc kubenswrapper[4866]: I1213 22:21:12.082740 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bcb7ac92-e606-45f0-aa3c-9b6ef14537cb-client-ca\") pod \"route-controller-manager-6db8d698b7-xfzjl\" (UID: \"bcb7ac92-e606-45f0-aa3c-9b6ef14537cb\") " pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-xfzjl" Dec 13 22:21:12 crc kubenswrapper[4866]: I1213 22:21:12.085828 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcb7ac92-e606-45f0-aa3c-9b6ef14537cb-serving-cert\") pod \"route-controller-manager-6db8d698b7-xfzjl\" (UID: \"bcb7ac92-e606-45f0-aa3c-9b6ef14537cb\") " pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-xfzjl" Dec 13 22:21:12 crc kubenswrapper[4866]: I1213 22:21:12.086579 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcb7ac92-e606-45f0-aa3c-9b6ef14537cb-config\") pod \"route-controller-manager-6db8d698b7-xfzjl\" (UID: \"bcb7ac92-e606-45f0-aa3c-9b6ef14537cb\") " pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-xfzjl" Dec 13 22:21:12 crc kubenswrapper[4866]: I1213 22:21:12.101452 4866 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-pt9xk\" (UniqueName: \"kubernetes.io/projected/bcb7ac92-e606-45f0-aa3c-9b6ef14537cb-kube-api-access-pt9xk\") pod \"route-controller-manager-6db8d698b7-xfzjl\" (UID: \"bcb7ac92-e606-45f0-aa3c-9b6ef14537cb\") " pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-xfzjl" Dec 13 22:21:12 crc kubenswrapper[4866]: I1213 22:21:12.191246 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-xfzjl" Dec 13 22:21:12 crc kubenswrapper[4866]: I1213 22:21:12.220664 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf19516e-e35b-469c-8eb6-78f6302d60a8" path="/var/lib/kubelet/pods/bf19516e-e35b-469c-8eb6-78f6302d60a8/volumes" Dec 13 22:21:12 crc kubenswrapper[4866]: I1213 22:21:12.580994 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6db8d698b7-xfzjl"] Dec 13 22:21:13 crc kubenswrapper[4866]: I1213 22:21:13.141390 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-xfzjl" event={"ID":"bcb7ac92-e606-45f0-aa3c-9b6ef14537cb","Type":"ContainerStarted","Data":"661842dbab523b88dc95167f2384acd764ade2e33f3821905ab386586d71d875"} Dec 13 22:21:13 crc kubenswrapper[4866]: I1213 22:21:13.141805 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-xfzjl" event={"ID":"bcb7ac92-e606-45f0-aa3c-9b6ef14537cb","Type":"ContainerStarted","Data":"1119f64557aab8a9d29f2a37fad80b58ae621c1ad58032ef6e21e75bc213a174"} Dec 13 22:21:13 crc kubenswrapper[4866]: I1213 22:21:13.141829 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-xfzjl" Dec 13 22:21:13 crc kubenswrapper[4866]: I1213 22:21:13.162845 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-xfzjl" podStartSLOduration=3.162821531 podStartE2EDuration="3.162821531s" podCreationTimestamp="2025-12-13 22:21:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:21:13.157320227 +0000 UTC m=+271.198658789" watchObservedRunningTime="2025-12-13 22:21:13.162821531 +0000 UTC m=+271.204160093" Dec 13 22:21:13 crc kubenswrapper[4866]: I1213 22:21:13.256980 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6db8d698b7-xfzjl" Dec 13 22:21:15 crc kubenswrapper[4866]: I1213 22:21:15.364438 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-22h7b" Dec 13 22:21:15 crc kubenswrapper[4866]: I1213 22:21:15.436490 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wzf86"] Dec 13 22:21:40 crc kubenswrapper[4866]: I1213 22:21:40.482331 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" podUID="6fe29c2e-47b8-434d-a38f-8edd2992e345" containerName="registry" containerID="cri-o://0e99cb3975e1be7883ca57431dabdf5d7f35357b33a0b4009f785d721b39457f" gracePeriod=30 Dec 13 22:21:40 crc kubenswrapper[4866]: I1213 
Dec 13 22:21:40 crc kubenswrapper[4866]: I1213 22:21:40.840280 4866 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wzf86"
Dec 13 22:21:40 crc kubenswrapper[4866]: I1213 22:21:40.984223 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6fe29c2e-47b8-434d-a38f-8edd2992e345-bound-sa-token\") pod \"6fe29c2e-47b8-434d-a38f-8edd2992e345\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") "
Dec 13 22:21:40 crc kubenswrapper[4866]: I1213 22:21:40.984351 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6fe29c2e-47b8-434d-a38f-8edd2992e345-installation-pull-secrets\") pod \"6fe29c2e-47b8-434d-a38f-8edd2992e345\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") "
Dec 13 22:21:40 crc kubenswrapper[4866]: I1213 22:21:40.984422 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6fe29c2e-47b8-434d-a38f-8edd2992e345-registry-tls\") pod \"6fe29c2e-47b8-434d-a38f-8edd2992e345\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") "
Dec 13 22:21:40 crc kubenswrapper[4866]: I1213 22:21:40.984456 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6fe29c2e-47b8-434d-a38f-8edd2992e345-registry-certificates\") pod \"6fe29c2e-47b8-434d-a38f-8edd2992e345\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") "
Dec 13 22:21:40 crc kubenswrapper[4866]: I1213 22:21:40.984498 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6fe29c2e-47b8-434d-a38f-8edd2992e345-ca-trust-extracted\") pod \"6fe29c2e-47b8-434d-a38f-8edd2992e345\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") "
Dec 13 22:21:40 crc kubenswrapper[4866]: I1213 22:21:40.984523 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dpwh\" (UniqueName: \"kubernetes.io/projected/6fe29c2e-47b8-434d-a38f-8edd2992e345-kube-api-access-5dpwh\") pod \"6fe29c2e-47b8-434d-a38f-8edd2992e345\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") "
Dec 13 22:21:40 crc kubenswrapper[4866]: I1213 22:21:40.984587 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6fe29c2e-47b8-434d-a38f-8edd2992e345-trusted-ca\") pod \"6fe29c2e-47b8-434d-a38f-8edd2992e345\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") "
Dec 13 22:21:40 crc kubenswrapper[4866]: I1213 22:21:40.984762 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"6fe29c2e-47b8-434d-a38f-8edd2992e345\" (UID: \"6fe29c2e-47b8-434d-a38f-8edd2992e345\") "
Dec 13 22:21:40 crc kubenswrapper[4866]: I1213 22:21:40.985857 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fe29c2e-47b8-434d-a38f-8edd2992e345-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "6fe29c2e-47b8-434d-a38f-8edd2992e345" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 22:21:40 crc kubenswrapper[4866]: I1213 22:21:40.986064 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fe29c2e-47b8-434d-a38f-8edd2992e345-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "6fe29c2e-47b8-434d-a38f-8edd2992e345" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 22:21:40 crc kubenswrapper[4866]: I1213 22:21:40.990415 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fe29c2e-47b8-434d-a38f-8edd2992e345-kube-api-access-5dpwh" (OuterVolumeSpecName: "kube-api-access-5dpwh") pod "6fe29c2e-47b8-434d-a38f-8edd2992e345" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345"). InnerVolumeSpecName "kube-api-access-5dpwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 22:21:40 crc kubenswrapper[4866]: I1213 22:21:40.991811 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fe29c2e-47b8-434d-a38f-8edd2992e345-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "6fe29c2e-47b8-434d-a38f-8edd2992e345" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 22:21:40 crc kubenswrapper[4866]: I1213 22:21:40.993711 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "6fe29c2e-47b8-434d-a38f-8edd2992e345" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Dec 13 22:21:40 crc kubenswrapper[4866]: I1213 22:21:40.998220 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fe29c2e-47b8-434d-a38f-8edd2992e345-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "6fe29c2e-47b8-434d-a38f-8edd2992e345" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 22:21:41 crc kubenswrapper[4866]: I1213 22:21:41.000216 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fe29c2e-47b8-434d-a38f-8edd2992e345-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "6fe29c2e-47b8-434d-a38f-8edd2992e345" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 22:21:41 crc kubenswrapper[4866]: I1213 22:21:41.006546 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fe29c2e-47b8-434d-a38f-8edd2992e345-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "6fe29c2e-47b8-434d-a38f-8edd2992e345" (UID: "6fe29c2e-47b8-434d-a38f-8edd2992e345"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 13 22:21:41 crc kubenswrapper[4866]: I1213 22:21:41.086287 4866 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6fe29c2e-47b8-434d-a38f-8edd2992e345-registry-tls\") on node \"crc\" DevicePath \"\""
Dec 13 22:21:41 crc kubenswrapper[4866]: I1213 22:21:41.086322 4866 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6fe29c2e-47b8-434d-a38f-8edd2992e345-registry-certificates\") on node \"crc\" DevicePath \"\""
Dec 13 22:21:41 crc kubenswrapper[4866]: I1213 22:21:41.086334 4866 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6fe29c2e-47b8-434d-a38f-8edd2992e345-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Dec 13 22:21:41 crc kubenswrapper[4866]: I1213 22:21:41.086344 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dpwh\" (UniqueName: \"kubernetes.io/projected/6fe29c2e-47b8-434d-a38f-8edd2992e345-kube-api-access-5dpwh\") on node \"crc\" DevicePath \"\""
Dec 13 22:21:41 crc kubenswrapper[4866]: I1213 22:21:41.086353 4866 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6fe29c2e-47b8-434d-a38f-8edd2992e345-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 13 22:21:41 crc kubenswrapper[4866]: I1213 22:21:41.086362 4866 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6fe29c2e-47b8-434d-a38f-8edd2992e345-bound-sa-token\") on node \"crc\" DevicePath \"\""
Dec 13 22:21:41 crc kubenswrapper[4866]: I1213 22:21:41.086371 4866 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6fe29c2e-47b8-434d-a38f-8edd2992e345-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Dec 13 22:21:41 crc kubenswrapper[4866]: I1213 22:21:41.296934 4866 generic.go:334] "Generic (PLEG): container finished" podID="6fe29c2e-47b8-434d-a38f-8edd2992e345" containerID="0e99cb3975e1be7883ca57431dabdf5d7f35357b33a0b4009f785d721b39457f" exitCode=0
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" Dec 13 22:21:41 crc kubenswrapper[4866]: I1213 22:21:41.297005 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" event={"ID":"6fe29c2e-47b8-434d-a38f-8edd2992e345","Type":"ContainerDied","Data":"0e99cb3975e1be7883ca57431dabdf5d7f35357b33a0b4009f785d721b39457f"} Dec 13 22:21:41 crc kubenswrapper[4866]: I1213 22:21:41.297466 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wzf86" event={"ID":"6fe29c2e-47b8-434d-a38f-8edd2992e345","Type":"ContainerDied","Data":"0f0794a222da61b0ca55054cd1bf08ff618259152d6b606bca4f07d983207fbe"} Dec 13 22:21:41 crc kubenswrapper[4866]: I1213 22:21:41.297488 4866 scope.go:117] "RemoveContainer" containerID="0e99cb3975e1be7883ca57431dabdf5d7f35357b33a0b4009f785d721b39457f" Dec 13 22:21:41 crc kubenswrapper[4866]: I1213 22:21:41.314880 4866 scope.go:117] "RemoveContainer" containerID="0e99cb3975e1be7883ca57431dabdf5d7f35357b33a0b4009f785d721b39457f" Dec 13 22:21:41 crc kubenswrapper[4866]: E1213 22:21:41.315632 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e99cb3975e1be7883ca57431dabdf5d7f35357b33a0b4009f785d721b39457f\": container with ID starting with 0e99cb3975e1be7883ca57431dabdf5d7f35357b33a0b4009f785d721b39457f not found: ID does not exist" containerID="0e99cb3975e1be7883ca57431dabdf5d7f35357b33a0b4009f785d721b39457f" Dec 13 22:21:41 crc kubenswrapper[4866]: I1213 22:21:41.315788 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e99cb3975e1be7883ca57431dabdf5d7f35357b33a0b4009f785d721b39457f"} err="failed to get container status \"0e99cb3975e1be7883ca57431dabdf5d7f35357b33a0b4009f785d721b39457f\": rpc error: code = NotFound desc = could not find container \"0e99cb3975e1be7883ca57431dabdf5d7f35357b33a0b4009f785d721b39457f\": container with ID starting with 0e99cb3975e1be7883ca57431dabdf5d7f35357b33a0b4009f785d721b39457f not found: ID does not exist" Dec 13 22:21:41 crc kubenswrapper[4866]: I1213 22:21:41.326302 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wzf86"] Dec 13 22:21:41 crc kubenswrapper[4866]: I1213 22:21:41.331169 4866 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wzf86"] Dec 13 22:21:42 crc kubenswrapper[4866]: I1213 22:21:42.079644 4866 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Dec 13 22:21:42 crc kubenswrapper[4866]: I1213 22:21:42.224989 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fe29c2e-47b8-434d-a38f-8edd2992e345" path="/var/lib/kubelet/pods/6fe29c2e-47b8-434d-a38f-8edd2992e345/volumes" Dec 13 22:21:50 crc kubenswrapper[4866]: I1213 22:21:50.291736 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-558d65cfdd-5wj99"] Dec 13 22:21:50 crc kubenswrapper[4866]: I1213 22:21:50.292485 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-558d65cfdd-5wj99" podUID="8092b32f-85f8-47f5-8298-6319f103cf7a" containerName="controller-manager" containerID="cri-o://61dc444238b690e5a12eb163763f2796b152342aa631aae34fb419243d830b65" gracePeriod=30 Dec 13 
Dec 13 22:21:51 crc kubenswrapper[4866]: I1213 22:21:51.211537 4866 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-558d65cfdd-5wj99"
Dec 13 22:21:51 crc kubenswrapper[4866]: I1213 22:21:51.334278 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8092b32f-85f8-47f5-8298-6319f103cf7a-serving-cert\") pod \"8092b32f-85f8-47f5-8298-6319f103cf7a\" (UID: \"8092b32f-85f8-47f5-8298-6319f103cf7a\") "
Dec 13 22:21:51 crc kubenswrapper[4866]: I1213 22:21:51.334358 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd4h4\" (UniqueName: \"kubernetes.io/projected/8092b32f-85f8-47f5-8298-6319f103cf7a-kube-api-access-vd4h4\") pod \"8092b32f-85f8-47f5-8298-6319f103cf7a\" (UID: \"8092b32f-85f8-47f5-8298-6319f103cf7a\") "
Dec 13 22:21:51 crc kubenswrapper[4866]: I1213 22:21:51.334397 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8092b32f-85f8-47f5-8298-6319f103cf7a-client-ca\") pod \"8092b32f-85f8-47f5-8298-6319f103cf7a\" (UID: \"8092b32f-85f8-47f5-8298-6319f103cf7a\") "
Dec 13 22:21:51 crc kubenswrapper[4866]: I1213 22:21:51.334434 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8092b32f-85f8-47f5-8298-6319f103cf7a-config\") pod \"8092b32f-85f8-47f5-8298-6319f103cf7a\" (UID: \"8092b32f-85f8-47f5-8298-6319f103cf7a\") "
Dec 13 22:21:51 crc kubenswrapper[4866]: I1213 22:21:51.334527 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8092b32f-85f8-47f5-8298-6319f103cf7a-proxy-ca-bundles\") pod \"8092b32f-85f8-47f5-8298-6319f103cf7a\" (UID: \"8092b32f-85f8-47f5-8298-6319f103cf7a\") "
Dec 13 22:21:51 crc kubenswrapper[4866]: I1213 22:21:51.335693 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8092b32f-85f8-47f5-8298-6319f103cf7a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8092b32f-85f8-47f5-8298-6319f103cf7a" (UID: "8092b32f-85f8-47f5-8298-6319f103cf7a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 22:21:51 crc kubenswrapper[4866]: I1213 22:21:51.335846 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8092b32f-85f8-47f5-8298-6319f103cf7a-client-ca" (OuterVolumeSpecName: "client-ca") pod "8092b32f-85f8-47f5-8298-6319f103cf7a" (UID: "8092b32f-85f8-47f5-8298-6319f103cf7a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 22:21:51 crc kubenswrapper[4866]: I1213 22:21:51.336307 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8092b32f-85f8-47f5-8298-6319f103cf7a-config" (OuterVolumeSpecName: "config") pod "8092b32f-85f8-47f5-8298-6319f103cf7a" (UID: "8092b32f-85f8-47f5-8298-6319f103cf7a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 22:21:51 crc kubenswrapper[4866]: I1213 22:21:51.340464 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8092b32f-85f8-47f5-8298-6319f103cf7a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8092b32f-85f8-47f5-8298-6319f103cf7a" (UID: "8092b32f-85f8-47f5-8298-6319f103cf7a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 22:21:51 crc kubenswrapper[4866]: I1213 22:21:51.341375 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8092b32f-85f8-47f5-8298-6319f103cf7a-kube-api-access-vd4h4" (OuterVolumeSpecName: "kube-api-access-vd4h4") pod "8092b32f-85f8-47f5-8298-6319f103cf7a" (UID: "8092b32f-85f8-47f5-8298-6319f103cf7a"). InnerVolumeSpecName "kube-api-access-vd4h4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 22:21:51 crc kubenswrapper[4866]: I1213 22:21:51.355853 4866 generic.go:334] "Generic (PLEG): container finished" podID="8092b32f-85f8-47f5-8298-6319f103cf7a" containerID="61dc444238b690e5a12eb163763f2796b152342aa631aae34fb419243d830b65" exitCode=0
Dec 13 22:21:51 crc kubenswrapper[4866]: I1213 22:21:51.355907 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-558d65cfdd-5wj99" event={"ID":"8092b32f-85f8-47f5-8298-6319f103cf7a","Type":"ContainerDied","Data":"61dc444238b690e5a12eb163763f2796b152342aa631aae34fb419243d830b65"}
Dec 13 22:21:51 crc kubenswrapper[4866]: I1213 22:21:51.355946 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-558d65cfdd-5wj99" event={"ID":"8092b32f-85f8-47f5-8298-6319f103cf7a","Type":"ContainerDied","Data":"5776164ccd19813fcc79cbaaa03e87312727b91be02b080c83edf715e5d4f218"}
Dec 13 22:21:51 crc kubenswrapper[4866]: I1213 22:21:51.355962 4866 scope.go:117] "RemoveContainer" containerID="61dc444238b690e5a12eb163763f2796b152342aa631aae34fb419243d830b65"
Dec 13 22:21:51 crc kubenswrapper[4866]: I1213 22:21:51.355889 4866 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-558d65cfdd-5wj99"
Dec 13 22:21:51 crc kubenswrapper[4866]: I1213 22:21:51.407470 4866 scope.go:117] "RemoveContainer" containerID="61dc444238b690e5a12eb163763f2796b152342aa631aae34fb419243d830b65"
Dec 13 22:21:51 crc kubenswrapper[4866]: E1213 22:21:51.411404 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61dc444238b690e5a12eb163763f2796b152342aa631aae34fb419243d830b65\": container with ID starting with 61dc444238b690e5a12eb163763f2796b152342aa631aae34fb419243d830b65 not found: ID does not exist" containerID="61dc444238b690e5a12eb163763f2796b152342aa631aae34fb419243d830b65"
Dec 13 22:21:51 crc kubenswrapper[4866]: I1213 22:21:51.411478 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61dc444238b690e5a12eb163763f2796b152342aa631aae34fb419243d830b65"} err="failed to get container status \"61dc444238b690e5a12eb163763f2796b152342aa631aae34fb419243d830b65\": rpc error: code = NotFound desc = could not find container \"61dc444238b690e5a12eb163763f2796b152342aa631aae34fb419243d830b65\": container with ID starting with 61dc444238b690e5a12eb163763f2796b152342aa631aae34fb419243d830b65 not found: ID does not exist"
Dec 13 22:21:51 crc kubenswrapper[4866]: I1213 22:21:51.415631 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-558d65cfdd-5wj99"]
Dec 13 22:21:51 crc kubenswrapper[4866]: I1213 22:21:51.423660 4866 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-558d65cfdd-5wj99"]
Dec 13 22:21:51 crc kubenswrapper[4866]: I1213 22:21:51.436095 4866 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8092b32f-85f8-47f5-8298-6319f103cf7a-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 13 22:21:51 crc kubenswrapper[4866]: I1213 22:21:51.436140 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vd4h4\" (UniqueName: \"kubernetes.io/projected/8092b32f-85f8-47f5-8298-6319f103cf7a-kube-api-access-vd4h4\") on node \"crc\" DevicePath \"\""
Dec 13 22:21:51 crc kubenswrapper[4866]: I1213 22:21:51.436207 4866 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8092b32f-85f8-47f5-8298-6319f103cf7a-client-ca\") on node \"crc\" DevicePath \"\""
Dec 13 22:21:51 crc kubenswrapper[4866]: I1213 22:21:51.436217 4866 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8092b32f-85f8-47f5-8298-6319f103cf7a-config\") on node \"crc\" DevicePath \"\""
Dec 13 22:21:51 crc kubenswrapper[4866]: I1213 22:21:51.436226 4866 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8092b32f-85f8-47f5-8298-6319f103cf7a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 13 22:21:51 crc kubenswrapper[4866]: I1213 22:21:51.877274 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7f87cc8767-l9wvr"]
Dec 13 22:21:51 crc kubenswrapper[4866]: E1213 22:21:51.877870 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fe29c2e-47b8-434d-a38f-8edd2992e345" containerName="registry"
podUID="6fe29c2e-47b8-434d-a38f-8edd2992e345" containerName="registry" Dec 13 22:21:51 crc kubenswrapper[4866]: E1213 22:21:51.877920 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8092b32f-85f8-47f5-8298-6319f103cf7a" containerName="controller-manager" Dec 13 22:21:51 crc kubenswrapper[4866]: I1213 22:21:51.877936 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="8092b32f-85f8-47f5-8298-6319f103cf7a" containerName="controller-manager" Dec 13 22:21:51 crc kubenswrapper[4866]: I1213 22:21:51.878083 4866 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fe29c2e-47b8-434d-a38f-8edd2992e345" containerName="registry" Dec 13 22:21:51 crc kubenswrapper[4866]: I1213 22:21:51.878109 4866 memory_manager.go:354] "RemoveStaleState removing state" podUID="8092b32f-85f8-47f5-8298-6319f103cf7a" containerName="controller-manager" Dec 13 22:21:51 crc kubenswrapper[4866]: I1213 22:21:51.878579 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f87cc8767-l9wvr" Dec 13 22:21:51 crc kubenswrapper[4866]: I1213 22:21:51.880642 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 13 22:21:51 crc kubenswrapper[4866]: I1213 22:21:51.881203 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 13 22:21:51 crc kubenswrapper[4866]: I1213 22:21:51.882255 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 13 22:21:51 crc kubenswrapper[4866]: I1213 22:21:51.882778 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 13 22:21:51 crc kubenswrapper[4866]: I1213 22:21:51.886646 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 13 22:21:51 crc kubenswrapper[4866]: I1213 22:21:51.887712 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f87cc8767-l9wvr"] Dec 13 22:21:51 crc kubenswrapper[4866]: I1213 22:21:51.890369 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 13 22:21:51 crc kubenswrapper[4866]: I1213 22:21:51.912640 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 13 22:21:52 crc kubenswrapper[4866]: I1213 22:21:52.045341 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2892c18c-3b79-4231-b632-d4984eff2cab-config\") pod \"controller-manager-7f87cc8767-l9wvr\" (UID: \"2892c18c-3b79-4231-b632-d4984eff2cab\") " pod="openshift-controller-manager/controller-manager-7f87cc8767-l9wvr" Dec 13 22:21:52 crc kubenswrapper[4866]: I1213 22:21:52.045445 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw6xp\" (UniqueName: \"kubernetes.io/projected/2892c18c-3b79-4231-b632-d4984eff2cab-kube-api-access-mw6xp\") pod \"controller-manager-7f87cc8767-l9wvr\" (UID: \"2892c18c-3b79-4231-b632-d4984eff2cab\") " pod="openshift-controller-manager/controller-manager-7f87cc8767-l9wvr" Dec 13 22:21:52 crc kubenswrapper[4866]: I1213 22:21:52.045493 4866 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2892c18c-3b79-4231-b632-d4984eff2cab-serving-cert\") pod \"controller-manager-7f87cc8767-l9wvr\" (UID: \"2892c18c-3b79-4231-b632-d4984eff2cab\") " pod="openshift-controller-manager/controller-manager-7f87cc8767-l9wvr" Dec 13 22:21:52 crc kubenswrapper[4866]: I1213 22:21:52.045529 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2892c18c-3b79-4231-b632-d4984eff2cab-proxy-ca-bundles\") pod \"controller-manager-7f87cc8767-l9wvr\" (UID: \"2892c18c-3b79-4231-b632-d4984eff2cab\") " pod="openshift-controller-manager/controller-manager-7f87cc8767-l9wvr" Dec 13 22:21:52 crc kubenswrapper[4866]: I1213 22:21:52.045600 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2892c18c-3b79-4231-b632-d4984eff2cab-client-ca\") pod \"controller-manager-7f87cc8767-l9wvr\" (UID: \"2892c18c-3b79-4231-b632-d4984eff2cab\") " pod="openshift-controller-manager/controller-manager-7f87cc8767-l9wvr" Dec 13 22:21:52 crc kubenswrapper[4866]: I1213 22:21:52.146677 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2892c18c-3b79-4231-b632-d4984eff2cab-client-ca\") pod \"controller-manager-7f87cc8767-l9wvr\" (UID: \"2892c18c-3b79-4231-b632-d4984eff2cab\") " pod="openshift-controller-manager/controller-manager-7f87cc8767-l9wvr" Dec 13 22:21:52 crc kubenswrapper[4866]: I1213 22:21:52.146774 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2892c18c-3b79-4231-b632-d4984eff2cab-config\") pod \"controller-manager-7f87cc8767-l9wvr\" (UID: \"2892c18c-3b79-4231-b632-d4984eff2cab\") " pod="openshift-controller-manager/controller-manager-7f87cc8767-l9wvr" Dec 13 22:21:52 crc kubenswrapper[4866]: I1213 22:21:52.146818 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw6xp\" (UniqueName: \"kubernetes.io/projected/2892c18c-3b79-4231-b632-d4984eff2cab-kube-api-access-mw6xp\") pod \"controller-manager-7f87cc8767-l9wvr\" (UID: \"2892c18c-3b79-4231-b632-d4984eff2cab\") " pod="openshift-controller-manager/controller-manager-7f87cc8767-l9wvr" Dec 13 22:21:52 crc kubenswrapper[4866]: I1213 22:21:52.146846 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2892c18c-3b79-4231-b632-d4984eff2cab-serving-cert\") pod \"controller-manager-7f87cc8767-l9wvr\" (UID: \"2892c18c-3b79-4231-b632-d4984eff2cab\") " pod="openshift-controller-manager/controller-manager-7f87cc8767-l9wvr" Dec 13 22:21:52 crc kubenswrapper[4866]: I1213 22:21:52.146868 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2892c18c-3b79-4231-b632-d4984eff2cab-proxy-ca-bundles\") pod \"controller-manager-7f87cc8767-l9wvr\" (UID: \"2892c18c-3b79-4231-b632-d4984eff2cab\") " pod="openshift-controller-manager/controller-manager-7f87cc8767-l9wvr" Dec 13 22:21:52 crc kubenswrapper[4866]: I1213 22:21:52.148216 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2892c18c-3b79-4231-b632-d4984eff2cab-client-ca\") pod 
\"controller-manager-7f87cc8767-l9wvr\" (UID: \"2892c18c-3b79-4231-b632-d4984eff2cab\") " pod="openshift-controller-manager/controller-manager-7f87cc8767-l9wvr" Dec 13 22:21:52 crc kubenswrapper[4866]: I1213 22:21:52.148662 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2892c18c-3b79-4231-b632-d4984eff2cab-config\") pod \"controller-manager-7f87cc8767-l9wvr\" (UID: \"2892c18c-3b79-4231-b632-d4984eff2cab\") " pod="openshift-controller-manager/controller-manager-7f87cc8767-l9wvr" Dec 13 22:21:52 crc kubenswrapper[4866]: I1213 22:21:52.148993 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2892c18c-3b79-4231-b632-d4984eff2cab-proxy-ca-bundles\") pod \"controller-manager-7f87cc8767-l9wvr\" (UID: \"2892c18c-3b79-4231-b632-d4984eff2cab\") " pod="openshift-controller-manager/controller-manager-7f87cc8767-l9wvr" Dec 13 22:21:52 crc kubenswrapper[4866]: I1213 22:21:52.151020 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2892c18c-3b79-4231-b632-d4984eff2cab-serving-cert\") pod \"controller-manager-7f87cc8767-l9wvr\" (UID: \"2892c18c-3b79-4231-b632-d4984eff2cab\") " pod="openshift-controller-manager/controller-manager-7f87cc8767-l9wvr" Dec 13 22:21:52 crc kubenswrapper[4866]: I1213 22:21:52.167203 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw6xp\" (UniqueName: \"kubernetes.io/projected/2892c18c-3b79-4231-b632-d4984eff2cab-kube-api-access-mw6xp\") pod \"controller-manager-7f87cc8767-l9wvr\" (UID: \"2892c18c-3b79-4231-b632-d4984eff2cab\") " pod="openshift-controller-manager/controller-manager-7f87cc8767-l9wvr" Dec 13 22:21:52 crc kubenswrapper[4866]: I1213 22:21:52.197228 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f87cc8767-l9wvr" Dec 13 22:21:52 crc kubenswrapper[4866]: I1213 22:21:52.220000 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8092b32f-85f8-47f5-8298-6319f103cf7a" path="/var/lib/kubelet/pods/8092b32f-85f8-47f5-8298-6319f103cf7a/volumes" Dec 13 22:21:52 crc kubenswrapper[4866]: I1213 22:21:52.401891 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f87cc8767-l9wvr"] Dec 13 22:21:53 crc kubenswrapper[4866]: I1213 22:21:53.375686 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f87cc8767-l9wvr" event={"ID":"2892c18c-3b79-4231-b632-d4984eff2cab","Type":"ContainerStarted","Data":"0528c47cd7fefd07f6e253f361b0d7beadc52f2d698aad38dbe5eaef88249c88"} Dec 13 22:21:53 crc kubenswrapper[4866]: I1213 22:21:53.376363 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7f87cc8767-l9wvr" Dec 13 22:21:53 crc kubenswrapper[4866]: I1213 22:21:53.376405 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f87cc8767-l9wvr" event={"ID":"2892c18c-3b79-4231-b632-d4984eff2cab","Type":"ContainerStarted","Data":"4359583643c324188f8757022f33d843036ae866745a4791da6825675c024ac8"} Dec 13 22:21:53 crc kubenswrapper[4866]: I1213 22:21:53.381219 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7f87cc8767-l9wvr" Dec 13 22:21:53 crc kubenswrapper[4866]: I1213 22:21:53.409127 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7f87cc8767-l9wvr" podStartSLOduration=3.40910798 podStartE2EDuration="3.40910798s" podCreationTimestamp="2025-12-13 22:21:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:21:53.39349136 +0000 UTC m=+311.434829912" watchObservedRunningTime="2025-12-13 22:21:53.40910798 +0000 UTC m=+311.450446532" Dec 13 22:22:19 crc kubenswrapper[4866]: I1213 22:22:19.819103 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sj6bb"] Dec 13 22:22:19 crc kubenswrapper[4866]: I1213 22:22:19.820112 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sj6bb" podUID="a61d5864-61a4-46a9-a4f5-020d4ed879cd" containerName="registry-server" containerID="cri-o://08c4c96ab9b1351e6c4d8cee0c86bc74aa979f2d7a75a0ef12cb830724aabf84" gracePeriod=30 Dec 13 22:22:19 crc kubenswrapper[4866]: I1213 22:22:19.835703 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rw67t"] Dec 13 22:22:19 crc kubenswrapper[4866]: I1213 22:22:19.836436 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rw67t" podUID="d019a2fd-1864-4c5b-8deb-62c898466850" containerName="registry-server" containerID="cri-o://dd75b0e02026e02d0df6c6baf28aa236eac032b321561b66463d203012f1c775" gracePeriod=30 Dec 13 22:22:19 crc kubenswrapper[4866]: I1213 22:22:19.839769 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-brrt8"] Dec 13 22:22:19 crc kubenswrapper[4866]: I1213 
Dec 13 22:22:19 crc kubenswrapper[4866]: I1213 22:22:19.839997 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-brrt8" podUID="a82c92c0-47dc-4f29-8aa0-304a9f34f728" containerName="marketplace-operator" containerID="cri-o://8f45ac33445ca44bcfb9a4085594cda6d34f2026737786582a13bd893448940a" gracePeriod=30
Dec 13 22:22:19 crc kubenswrapper[4866]: I1213 22:22:19.865073 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mjtlq"]
Dec 13 22:22:19 crc kubenswrapper[4866]: I1213 22:22:19.865713 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mjtlq"
Dec 13 22:22:19 crc kubenswrapper[4866]: I1213 22:22:19.876354 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6hxqc"]
Dec 13 22:22:19 crc kubenswrapper[4866]: I1213 22:22:19.876640 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6hxqc" podUID="902bee05-89e6-48b6-becf-d715d04dd8cd" containerName="registry-server" containerID="cri-o://ecc2c3d86c114b82c821135f61519c0ab4b88b6502546ecc9eaf5dad972df5da" gracePeriod=30
Dec 13 22:22:19 crc kubenswrapper[4866]: I1213 22:22:19.880314 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zdvwj"]
Dec 13 22:22:19 crc kubenswrapper[4866]: I1213 22:22:19.880541 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zdvwj" podUID="1a29082d-49a1-4625-9d9b-568ef75773c8" containerName="registry-server" containerID="cri-o://4406f8808966fae1ee8e0386b8ed67563049a20cad7eca03e80446550de080c8" gracePeriod=30
Dec 13 22:22:19 crc kubenswrapper[4866]: I1213 22:22:19.893823 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mjtlq"]
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.036572 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/814d3162-a076-488a-9162-b5154651254d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mjtlq\" (UID: \"814d3162-a076-488a-9162-b5154651254d\") " pod="openshift-marketplace/marketplace-operator-79b997595-mjtlq"
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.036620 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcdkt\" (UniqueName: \"kubernetes.io/projected/814d3162-a076-488a-9162-b5154651254d-kube-api-access-vcdkt\") pod \"marketplace-operator-79b997595-mjtlq\" (UID: \"814d3162-a076-488a-9162-b5154651254d\") " pod="openshift-marketplace/marketplace-operator-79b997595-mjtlq"
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.036657 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/814d3162-a076-488a-9162-b5154651254d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mjtlq\" (UID: \"814d3162-a076-488a-9162-b5154651254d\") " pod="openshift-marketplace/marketplace-operator-79b997595-mjtlq"
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.138250 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcdkt\" (UniqueName: \"kubernetes.io/projected/814d3162-a076-488a-9162-b5154651254d-kube-api-access-vcdkt\") pod \"marketplace-operator-79b997595-mjtlq\" (UID: \"814d3162-a076-488a-9162-b5154651254d\") " pod="openshift-marketplace/marketplace-operator-79b997595-mjtlq"
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.138315 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/814d3162-a076-488a-9162-b5154651254d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mjtlq\" (UID: \"814d3162-a076-488a-9162-b5154651254d\") " pod="openshift-marketplace/marketplace-operator-79b997595-mjtlq"
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.138426 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/814d3162-a076-488a-9162-b5154651254d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mjtlq\" (UID: \"814d3162-a076-488a-9162-b5154651254d\") " pod="openshift-marketplace/marketplace-operator-79b997595-mjtlq"
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.140242 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/814d3162-a076-488a-9162-b5154651254d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mjtlq\" (UID: \"814d3162-a076-488a-9162-b5154651254d\") " pod="openshift-marketplace/marketplace-operator-79b997595-mjtlq"
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.148900 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/814d3162-a076-488a-9162-b5154651254d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mjtlq\" (UID: \"814d3162-a076-488a-9162-b5154651254d\") " pod="openshift-marketplace/marketplace-operator-79b997595-mjtlq"
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.154336 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcdkt\" (UniqueName: \"kubernetes.io/projected/814d3162-a076-488a-9162-b5154651254d-kube-api-access-vcdkt\") pod \"marketplace-operator-79b997595-mjtlq\" (UID: \"814d3162-a076-488a-9162-b5154651254d\") " pod="openshift-marketplace/marketplace-operator-79b997595-mjtlq"
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.194940 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mjtlq"
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.360298 4866 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6hxqc"
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.456751 4866 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sj6bb"
Need to start a new one" pod="openshift-marketplace/redhat-operators-zdvwj" Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.531297 4866 generic.go:334] "Generic (PLEG): container finished" podID="d019a2fd-1864-4c5b-8deb-62c898466850" containerID="dd75b0e02026e02d0df6c6baf28aa236eac032b321561b66463d203012f1c775" exitCode=0 Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.531342 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rw67t" event={"ID":"d019a2fd-1864-4c5b-8deb-62c898466850","Type":"ContainerDied","Data":"dd75b0e02026e02d0df6c6baf28aa236eac032b321561b66463d203012f1c775"} Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.532823 4866 generic.go:334] "Generic (PLEG): container finished" podID="1a29082d-49a1-4625-9d9b-568ef75773c8" containerID="4406f8808966fae1ee8e0386b8ed67563049a20cad7eca03e80446550de080c8" exitCode=0 Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.532856 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdvwj" event={"ID":"1a29082d-49a1-4625-9d9b-568ef75773c8","Type":"ContainerDied","Data":"4406f8808966fae1ee8e0386b8ed67563049a20cad7eca03e80446550de080c8"} Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.532872 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdvwj" event={"ID":"1a29082d-49a1-4625-9d9b-568ef75773c8","Type":"ContainerDied","Data":"c244c0242cadbe3aa7813ccc93bdf123aaf8dd49856d75462500a4a4a3e1fb24"} Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.532888 4866 scope.go:117] "RemoveContainer" containerID="4406f8808966fae1ee8e0386b8ed67563049a20cad7eca03e80446550de080c8" Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.533026 4866 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zdvwj" Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.542589 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/902bee05-89e6-48b6-becf-d715d04dd8cd-catalog-content\") pod \"902bee05-89e6-48b6-becf-d715d04dd8cd\" (UID: \"902bee05-89e6-48b6-becf-d715d04dd8cd\") " Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.543345 4866 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6hxqc" Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.543259 4866 generic.go:334] "Generic (PLEG): container finished" podID="902bee05-89e6-48b6-becf-d715d04dd8cd" containerID="ecc2c3d86c114b82c821135f61519c0ab4b88b6502546ecc9eaf5dad972df5da" exitCode=0 Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.543283 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6hxqc" event={"ID":"902bee05-89e6-48b6-becf-d715d04dd8cd","Type":"ContainerDied","Data":"ecc2c3d86c114b82c821135f61519c0ab4b88b6502546ecc9eaf5dad972df5da"} Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.543672 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6hxqc" event={"ID":"902bee05-89e6-48b6-becf-d715d04dd8cd","Type":"ContainerDied","Data":"e7ca2401eeff7540afa03fc60eb34f3e64350ccfc3a584d526e387810be2845d"} Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.544162 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/902bee05-89e6-48b6-becf-d715d04dd8cd-utilities\") pod \"902bee05-89e6-48b6-becf-d715d04dd8cd\" (UID: \"902bee05-89e6-48b6-becf-d715d04dd8cd\") " Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.544191 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a29082d-49a1-4625-9d9b-568ef75773c8-utilities\") pod \"1a29082d-49a1-4625-9d9b-568ef75773c8\" (UID: \"1a29082d-49a1-4625-9d9b-568ef75773c8\") " Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.544385 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7wcf\" (UniqueName: \"kubernetes.io/projected/1a29082d-49a1-4625-9d9b-568ef75773c8-kube-api-access-d7wcf\") pod \"1a29082d-49a1-4625-9d9b-568ef75773c8\" (UID: \"1a29082d-49a1-4625-9d9b-568ef75773c8\") " Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.544418 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a29082d-49a1-4625-9d9b-568ef75773c8-catalog-content\") pod \"1a29082d-49a1-4625-9d9b-568ef75773c8\" (UID: \"1a29082d-49a1-4625-9d9b-568ef75773c8\") " Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.544436 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a61d5864-61a4-46a9-a4f5-020d4ed879cd-catalog-content\") pod \"a61d5864-61a4-46a9-a4f5-020d4ed879cd\" (UID: \"a61d5864-61a4-46a9-a4f5-020d4ed879cd\") " Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.544755 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmzqx\" (UniqueName: \"kubernetes.io/projected/902bee05-89e6-48b6-becf-d715d04dd8cd-kube-api-access-hmzqx\") pod \"902bee05-89e6-48b6-becf-d715d04dd8cd\" (UID: \"902bee05-89e6-48b6-becf-d715d04dd8cd\") " Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.545079 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a29082d-49a1-4625-9d9b-568ef75773c8-utilities" (OuterVolumeSpecName: "utilities") pod "1a29082d-49a1-4625-9d9b-568ef75773c8" (UID: "1a29082d-49a1-4625-9d9b-568ef75773c8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.547119 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/902bee05-89e6-48b6-becf-d715d04dd8cd-utilities" (OuterVolumeSpecName: "utilities") pod "902bee05-89e6-48b6-becf-d715d04dd8cd" (UID: "902bee05-89e6-48b6-becf-d715d04dd8cd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.548451 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a29082d-49a1-4625-9d9b-568ef75773c8-kube-api-access-d7wcf" (OuterVolumeSpecName: "kube-api-access-d7wcf") pod "1a29082d-49a1-4625-9d9b-568ef75773c8" (UID: "1a29082d-49a1-4625-9d9b-568ef75773c8"). InnerVolumeSpecName "kube-api-access-d7wcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.549265 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/902bee05-89e6-48b6-becf-d715d04dd8cd-kube-api-access-hmzqx" (OuterVolumeSpecName: "kube-api-access-hmzqx") pod "902bee05-89e6-48b6-becf-d715d04dd8cd" (UID: "902bee05-89e6-48b6-becf-d715d04dd8cd"). InnerVolumeSpecName "kube-api-access-hmzqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.550417 4866 generic.go:334] "Generic (PLEG): container finished" podID="a82c92c0-47dc-4f29-8aa0-304a9f34f728" containerID="8f45ac33445ca44bcfb9a4085594cda6d34f2026737786582a13bd893448940a" exitCode=0 Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.550490 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-brrt8" event={"ID":"a82c92c0-47dc-4f29-8aa0-304a9f34f728","Type":"ContainerDied","Data":"8f45ac33445ca44bcfb9a4085594cda6d34f2026737786582a13bd893448940a"} Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.555248 4866 generic.go:334] "Generic (PLEG): container finished" podID="a61d5864-61a4-46a9-a4f5-020d4ed879cd" containerID="08c4c96ab9b1351e6c4d8cee0c86bc74aa979f2d7a75a0ef12cb830724aabf84" exitCode=0 Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.555303 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sj6bb" event={"ID":"a61d5864-61a4-46a9-a4f5-020d4ed879cd","Type":"ContainerDied","Data":"08c4c96ab9b1351e6c4d8cee0c86bc74aa979f2d7a75a0ef12cb830724aabf84"} Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.555319 4866 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sj6bb" Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.555331 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sj6bb" event={"ID":"a61d5864-61a4-46a9-a4f5-020d4ed879cd","Type":"ContainerDied","Data":"eb0376c50b4239a24c4fab0067448a65ba24e9f09e6571e47f2c2c9d0c157e81"} Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.569497 4866 scope.go:117] "RemoveContainer" containerID="6a9489ac5a0bc95cf1c84de0c1e9fb176642788ccdfaac58a80279af8b166770" Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.589333 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7wcf\" (UniqueName: \"kubernetes.io/projected/1a29082d-49a1-4625-9d9b-568ef75773c8-kube-api-access-d7wcf\") on node \"crc\" DevicePath \"\"" Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.589358 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmzqx\" (UniqueName: \"kubernetes.io/projected/902bee05-89e6-48b6-becf-d715d04dd8cd-kube-api-access-hmzqx\") on node \"crc\" DevicePath \"\"" Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.589368 4866 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/902bee05-89e6-48b6-becf-d715d04dd8cd-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.589394 4866 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a29082d-49a1-4625-9d9b-568ef75773c8-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.602556 4866 scope.go:117] "RemoveContainer" containerID="491bfe3b44955d571343f182aec320862a1381c3bc5511c8498544f9ba98bb0d" Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.611526 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/902bee05-89e6-48b6-becf-d715d04dd8cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "902bee05-89e6-48b6-becf-d715d04dd8cd" (UID: "902bee05-89e6-48b6-becf-d715d04dd8cd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.627507 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a61d5864-61a4-46a9-a4f5-020d4ed879cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a61d5864-61a4-46a9-a4f5-020d4ed879cd" (UID: "a61d5864-61a4-46a9-a4f5-020d4ed879cd"). InnerVolumeSpecName "catalog-content". 
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.652772 4866 scope.go:117] "RemoveContainer" containerID="4406f8808966fae1ee8e0386b8ed67563049a20cad7eca03e80446550de080c8"
Dec 13 22:22:20 crc kubenswrapper[4866]: E1213 22:22:20.653372 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4406f8808966fae1ee8e0386b8ed67563049a20cad7eca03e80446550de080c8\": container with ID starting with 4406f8808966fae1ee8e0386b8ed67563049a20cad7eca03e80446550de080c8 not found: ID does not exist" containerID="4406f8808966fae1ee8e0386b8ed67563049a20cad7eca03e80446550de080c8"
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.653480 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4406f8808966fae1ee8e0386b8ed67563049a20cad7eca03e80446550de080c8"} err="failed to get container status \"4406f8808966fae1ee8e0386b8ed67563049a20cad7eca03e80446550de080c8\": rpc error: code = NotFound desc = could not find container \"4406f8808966fae1ee8e0386b8ed67563049a20cad7eca03e80446550de080c8\": container with ID starting with 4406f8808966fae1ee8e0386b8ed67563049a20cad7eca03e80446550de080c8 not found: ID does not exist"
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.653565 4866 scope.go:117] "RemoveContainer" containerID="6a9489ac5a0bc95cf1c84de0c1e9fb176642788ccdfaac58a80279af8b166770"
Dec 13 22:22:20 crc kubenswrapper[4866]: E1213 22:22:20.653840 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a9489ac5a0bc95cf1c84de0c1e9fb176642788ccdfaac58a80279af8b166770\": container with ID starting with 6a9489ac5a0bc95cf1c84de0c1e9fb176642788ccdfaac58a80279af8b166770 not found: ID does not exist" containerID="6a9489ac5a0bc95cf1c84de0c1e9fb176642788ccdfaac58a80279af8b166770"
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.653911 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a9489ac5a0bc95cf1c84de0c1e9fb176642788ccdfaac58a80279af8b166770"} err="failed to get container status \"6a9489ac5a0bc95cf1c84de0c1e9fb176642788ccdfaac58a80279af8b166770\": rpc error: code = NotFound desc = could not find container \"6a9489ac5a0bc95cf1c84de0c1e9fb176642788ccdfaac58a80279af8b166770\": container with ID starting with 6a9489ac5a0bc95cf1c84de0c1e9fb176642788ccdfaac58a80279af8b166770 not found: ID does not exist"
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.653973 4866 scope.go:117] "RemoveContainer" containerID="491bfe3b44955d571343f182aec320862a1381c3bc5511c8498544f9ba98bb0d"
Dec 13 22:22:20 crc kubenswrapper[4866]: E1213 22:22:20.654218 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"491bfe3b44955d571343f182aec320862a1381c3bc5511c8498544f9ba98bb0d\": container with ID starting with 491bfe3b44955d571343f182aec320862a1381c3bc5511c8498544f9ba98bb0d not found: ID does not exist" containerID="491bfe3b44955d571343f182aec320862a1381c3bc5511c8498544f9ba98bb0d"
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.654293 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"491bfe3b44955d571343f182aec320862a1381c3bc5511c8498544f9ba98bb0d"} err="failed to get container status \"491bfe3b44955d571343f182aec320862a1381c3bc5511c8498544f9ba98bb0d\": rpc error: code = NotFound desc = could not find container \"491bfe3b44955d571343f182aec320862a1381c3bc5511c8498544f9ba98bb0d\": container with ID starting with 491bfe3b44955d571343f182aec320862a1381c3bc5511c8498544f9ba98bb0d not found: ID does not exist"
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.654388 4866 scope.go:117] "RemoveContainer" containerID="ecc2c3d86c114b82c821135f61519c0ab4b88b6502546ecc9eaf5dad972df5da"
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.700628 4866 scope.go:117] "RemoveContainer" containerID="4a0074b5751ce246885605ef8e78060c988e1add43b50a2ad9c12000bef2f3f0"
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.701275 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trz64\" (UniqueName: \"kubernetes.io/projected/a61d5864-61a4-46a9-a4f5-020d4ed879cd-kube-api-access-trz64\") pod \"a61d5864-61a4-46a9-a4f5-020d4ed879cd\" (UID: \"a61d5864-61a4-46a9-a4f5-020d4ed879cd\") "
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.701389 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a61d5864-61a4-46a9-a4f5-020d4ed879cd-utilities\") pod \"a61d5864-61a4-46a9-a4f5-020d4ed879cd\" (UID: \"a61d5864-61a4-46a9-a4f5-020d4ed879cd\") "
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.701651 4866 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a61d5864-61a4-46a9-a4f5-020d4ed879cd-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.701671 4866 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/902bee05-89e6-48b6-becf-d715d04dd8cd-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.702529 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a61d5864-61a4-46a9-a4f5-020d4ed879cd-utilities" (OuterVolumeSpecName: "utilities") pod "a61d5864-61a4-46a9-a4f5-020d4ed879cd" (UID: "a61d5864-61a4-46a9-a4f5-020d4ed879cd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.704887 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a61d5864-61a4-46a9-a4f5-020d4ed879cd-kube-api-access-trz64" (OuterVolumeSpecName: "kube-api-access-trz64") pod "a61d5864-61a4-46a9-a4f5-020d4ed879cd" (UID: "a61d5864-61a4-46a9-a4f5-020d4ed879cd"). InnerVolumeSpecName "kube-api-access-trz64". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.714467 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a29082d-49a1-4625-9d9b-568ef75773c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a29082d-49a1-4625-9d9b-568ef75773c8" (UID: "1a29082d-49a1-4625-9d9b-568ef75773c8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
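
Worth noting as the RemoveContainer / "ContainerStatus from runtime service failed" pairs repeat through this stretch of the log: the NotFound errors are benign, since the kubelet is deleting containers that cri-o has already forgotten. A minimal Go sketch of that idempotent-delete pattern, with a hypothetical runtimeService interface standing in for the real CRI client (illustrative only, not kubelet source; just the gRPC status check mirrors the logged behavior):

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// runtimeService is a hypothetical stand-in for the CRI runtime client.
type runtimeService interface {
	RemoveContainer(ctx context.Context, id string) error
}

// notFoundRuntime simulates cri-o answering for an already-deleted container.
type notFoundRuntime struct{}

func (notFoundRuntime) RemoveContainer(_ context.Context, id string) error {
	return status.Errorf(codes.NotFound, "could not find container %q: ID does not exist", id)
}

// removeIfPresent treats NotFound as success: a container that is already
// gone is the desired end state, not an error worth surfacing.
func removeIfPresent(ctx context.Context, rs runtimeService, id string) error {
	if err := rs.RemoveContainer(ctx, id); err != nil && status.Code(err) != codes.NotFound {
		return fmt.Errorf("remove container %q: %w", id, err)
	}
	return nil
}

func main() {
	err := removeIfPresent(context.Background(), notFoundRuntime{}, "4406f880")
	fmt.Println("error:", err) // error: <nil> (the NotFound was swallowed)
}

Treating NotFound as success is what keeps these E-level lines from ever failing the pod deletion flow.
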
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.717752 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mjtlq"]
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.728617 4866 scope.go:117] "RemoveContainer" containerID="af39cce1b3a98f4219372a21bd3cd179fbfe716dc919a6f823e87f927c91c1fa"
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.739954 4866 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-brrt8"
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.759830 4866 scope.go:117] "RemoveContainer" containerID="ecc2c3d86c114b82c821135f61519c0ab4b88b6502546ecc9eaf5dad972df5da"
Dec 13 22:22:20 crc kubenswrapper[4866]: E1213 22:22:20.760349 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecc2c3d86c114b82c821135f61519c0ab4b88b6502546ecc9eaf5dad972df5da\": container with ID starting with ecc2c3d86c114b82c821135f61519c0ab4b88b6502546ecc9eaf5dad972df5da not found: ID does not exist" containerID="ecc2c3d86c114b82c821135f61519c0ab4b88b6502546ecc9eaf5dad972df5da"
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.760392 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecc2c3d86c114b82c821135f61519c0ab4b88b6502546ecc9eaf5dad972df5da"} err="failed to get container status \"ecc2c3d86c114b82c821135f61519c0ab4b88b6502546ecc9eaf5dad972df5da\": rpc error: code = NotFound desc = could not find container \"ecc2c3d86c114b82c821135f61519c0ab4b88b6502546ecc9eaf5dad972df5da\": container with ID starting with ecc2c3d86c114b82c821135f61519c0ab4b88b6502546ecc9eaf5dad972df5da not found: ID does not exist"
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.760419 4866 scope.go:117] "RemoveContainer" containerID="4a0074b5751ce246885605ef8e78060c988e1add43b50a2ad9c12000bef2f3f0"
Dec 13 22:22:20 crc kubenswrapper[4866]: E1213 22:22:20.760976 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a0074b5751ce246885605ef8e78060c988e1add43b50a2ad9c12000bef2f3f0\": container with ID starting with 4a0074b5751ce246885605ef8e78060c988e1add43b50a2ad9c12000bef2f3f0 not found: ID does not exist" containerID="4a0074b5751ce246885605ef8e78060c988e1add43b50a2ad9c12000bef2f3f0"
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.761015 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a0074b5751ce246885605ef8e78060c988e1add43b50a2ad9c12000bef2f3f0"} err="failed to get container status \"4a0074b5751ce246885605ef8e78060c988e1add43b50a2ad9c12000bef2f3f0\": rpc error: code = NotFound desc = could not find container \"4a0074b5751ce246885605ef8e78060c988e1add43b50a2ad9c12000bef2f3f0\": container with ID starting with 4a0074b5751ce246885605ef8e78060c988e1add43b50a2ad9c12000bef2f3f0 not found: ID does not exist"
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.761043 4866 scope.go:117] "RemoveContainer" containerID="af39cce1b3a98f4219372a21bd3cd179fbfe716dc919a6f823e87f927c91c1fa"
Dec 13 22:22:20 crc kubenswrapper[4866]: E1213 22:22:20.763346 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af39cce1b3a98f4219372a21bd3cd179fbfe716dc919a6f823e87f927c91c1fa\": container with ID starting with af39cce1b3a98f4219372a21bd3cd179fbfe716dc919a6f823e87f927c91c1fa not found: ID does not exist" containerID="af39cce1b3a98f4219372a21bd3cd179fbfe716dc919a6f823e87f927c91c1fa"
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.763434 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af39cce1b3a98f4219372a21bd3cd179fbfe716dc919a6f823e87f927c91c1fa"} err="failed to get container status \"af39cce1b3a98f4219372a21bd3cd179fbfe716dc919a6f823e87f927c91c1fa\": rpc error: code = NotFound desc = could not find container \"af39cce1b3a98f4219372a21bd3cd179fbfe716dc919a6f823e87f927c91c1fa\": container with ID starting with af39cce1b3a98f4219372a21bd3cd179fbfe716dc919a6f823e87f927c91c1fa not found: ID does not exist"
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.763514 4866 scope.go:117] "RemoveContainer" containerID="08c4c96ab9b1351e6c4d8cee0c86bc74aa979f2d7a75a0ef12cb830724aabf84"
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.783487 4866 scope.go:117] "RemoveContainer" containerID="1600053e1652e0d302818c442e0df67c90e0dbb7445b391489668d3d3fbfb302"
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.803119 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a82c92c0-47dc-4f29-8aa0-304a9f34f728-marketplace-operator-metrics\") pod \"a82c92c0-47dc-4f29-8aa0-304a9f34f728\" (UID: \"a82c92c0-47dc-4f29-8aa0-304a9f34f728\") "
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.803949 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dv2xv\" (UniqueName: \"kubernetes.io/projected/a82c92c0-47dc-4f29-8aa0-304a9f34f728-kube-api-access-dv2xv\") pod \"a82c92c0-47dc-4f29-8aa0-304a9f34f728\" (UID: \"a82c92c0-47dc-4f29-8aa0-304a9f34f728\") "
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.803996 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a82c92c0-47dc-4f29-8aa0-304a9f34f728-marketplace-trusted-ca\") pod \"a82c92c0-47dc-4f29-8aa0-304a9f34f728\" (UID: \"a82c92c0-47dc-4f29-8aa0-304a9f34f728\") "
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.804221 4866 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a29082d-49a1-4625-9d9b-568ef75773c8-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.804239 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trz64\" (UniqueName: \"kubernetes.io/projected/a61d5864-61a4-46a9-a4f5-020d4ed879cd-kube-api-access-trz64\") on node \"crc\" DevicePath \"\""
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.804250 4866 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a61d5864-61a4-46a9-a4f5-020d4ed879cd-utilities\") on node \"crc\" DevicePath \"\""
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.804707 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a82c92c0-47dc-4f29-8aa0-304a9f34f728-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "a82c92c0-47dc-4f29-8aa0-304a9f34f728" (UID: "a82c92c0-47dc-4f29-8aa0-304a9f34f728"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.806699 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a82c92c0-47dc-4f29-8aa0-304a9f34f728-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "a82c92c0-47dc-4f29-8aa0-304a9f34f728" (UID: "a82c92c0-47dc-4f29-8aa0-304a9f34f728"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.809541 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a82c92c0-47dc-4f29-8aa0-304a9f34f728-kube-api-access-dv2xv" (OuterVolumeSpecName: "kube-api-access-dv2xv") pod "a82c92c0-47dc-4f29-8aa0-304a9f34f728" (UID: "a82c92c0-47dc-4f29-8aa0-304a9f34f728"). InnerVolumeSpecName "kube-api-access-dv2xv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.841284 4866 scope.go:117] "RemoveContainer" containerID="091874fd34e0eb0d9d4e6791815eebd35ea4588d1bd65aa58e39971702c7d4cf"
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.862519 4866 scope.go:117] "RemoveContainer" containerID="08c4c96ab9b1351e6c4d8cee0c86bc74aa979f2d7a75a0ef12cb830724aabf84"
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.870466 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zdvwj"]
Dec 13 22:22:20 crc kubenswrapper[4866]: E1213 22:22:20.871811 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08c4c96ab9b1351e6c4d8cee0c86bc74aa979f2d7a75a0ef12cb830724aabf84\": container with ID starting with 08c4c96ab9b1351e6c4d8cee0c86bc74aa979f2d7a75a0ef12cb830724aabf84 not found: ID does not exist" containerID="08c4c96ab9b1351e6c4d8cee0c86bc74aa979f2d7a75a0ef12cb830724aabf84"
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.872168 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08c4c96ab9b1351e6c4d8cee0c86bc74aa979f2d7a75a0ef12cb830724aabf84"} err="failed to get container status \"08c4c96ab9b1351e6c4d8cee0c86bc74aa979f2d7a75a0ef12cb830724aabf84\": rpc error: code = NotFound desc = could not find container \"08c4c96ab9b1351e6c4d8cee0c86bc74aa979f2d7a75a0ef12cb830724aabf84\": container with ID starting with 08c4c96ab9b1351e6c4d8cee0c86bc74aa979f2d7a75a0ef12cb830724aabf84 not found: ID does not exist"
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.872204 4866 scope.go:117] "RemoveContainer" containerID="1600053e1652e0d302818c442e0df67c90e0dbb7445b391489668d3d3fbfb302"
Dec 13 22:22:20 crc kubenswrapper[4866]: E1213 22:22:20.872609 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1600053e1652e0d302818c442e0df67c90e0dbb7445b391489668d3d3fbfb302\": container with ID starting with 1600053e1652e0d302818c442e0df67c90e0dbb7445b391489668d3d3fbfb302 not found: ID does not exist" containerID="1600053e1652e0d302818c442e0df67c90e0dbb7445b391489668d3d3fbfb302"
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.872654 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1600053e1652e0d302818c442e0df67c90e0dbb7445b391489668d3d3fbfb302"} err="failed to get container status \"1600053e1652e0d302818c442e0df67c90e0dbb7445b391489668d3d3fbfb302\": rpc error: code = NotFound desc = could not find container \"1600053e1652e0d302818c442e0df67c90e0dbb7445b391489668d3d3fbfb302\": container with ID starting with 1600053e1652e0d302818c442e0df67c90e0dbb7445b391489668d3d3fbfb302 not found: ID does not exist"
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.872685 4866 scope.go:117] "RemoveContainer" containerID="091874fd34e0eb0d9d4e6791815eebd35ea4588d1bd65aa58e39971702c7d4cf"
Dec 13 22:22:20 crc kubenswrapper[4866]: E1213 22:22:20.872970 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"091874fd34e0eb0d9d4e6791815eebd35ea4588d1bd65aa58e39971702c7d4cf\": container with ID starting with 091874fd34e0eb0d9d4e6791815eebd35ea4588d1bd65aa58e39971702c7d4cf not found: ID does not exist" containerID="091874fd34e0eb0d9d4e6791815eebd35ea4588d1bd65aa58e39971702c7d4cf"
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.872991 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"091874fd34e0eb0d9d4e6791815eebd35ea4588d1bd65aa58e39971702c7d4cf"} err="failed to get container status \"091874fd34e0eb0d9d4e6791815eebd35ea4588d1bd65aa58e39971702c7d4cf\": rpc error: code = NotFound desc = could not find container \"091874fd34e0eb0d9d4e6791815eebd35ea4588d1bd65aa58e39971702c7d4cf\": container with ID starting with 091874fd34e0eb0d9d4e6791815eebd35ea4588d1bd65aa58e39971702c7d4cf not found: ID does not exist"
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.899180 4866 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zdvwj"]
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.904784 4866 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a82c92c0-47dc-4f29-8aa0-304a9f34f728-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.904805 4866 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a82c92c0-47dc-4f29-8aa0-304a9f34f728-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.904814 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dv2xv\" (UniqueName: \"kubernetes.io/projected/a82c92c0-47dc-4f29-8aa0-304a9f34f728-kube-api-access-dv2xv\") on node \"crc\" DevicePath \"\""
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.904835 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6hxqc"]
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.907191 4866 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6hxqc"]
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.908489 4866 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rw67t"
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.915272 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sj6bb"]
Dec 13 22:22:20 crc kubenswrapper[4866]: I1213 22:22:20.921014 4866 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sj6bb"]
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.005624 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d019a2fd-1864-4c5b-8deb-62c898466850-utilities\") pod \"d019a2fd-1864-4c5b-8deb-62c898466850\" (UID: \"d019a2fd-1864-4c5b-8deb-62c898466850\") "
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.005917 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2wcb\" (UniqueName: \"kubernetes.io/projected/d019a2fd-1864-4c5b-8deb-62c898466850-kube-api-access-h2wcb\") pod \"d019a2fd-1864-4c5b-8deb-62c898466850\" (UID: \"d019a2fd-1864-4c5b-8deb-62c898466850\") "
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.005945 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d019a2fd-1864-4c5b-8deb-62c898466850-catalog-content\") pod \"d019a2fd-1864-4c5b-8deb-62c898466850\" (UID: \"d019a2fd-1864-4c5b-8deb-62c898466850\") "
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.006438 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d019a2fd-1864-4c5b-8deb-62c898466850-utilities" (OuterVolumeSpecName: "utilities") pod "d019a2fd-1864-4c5b-8deb-62c898466850" (UID: "d019a2fd-1864-4c5b-8deb-62c898466850"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.009827 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d019a2fd-1864-4c5b-8deb-62c898466850-kube-api-access-h2wcb" (OuterVolumeSpecName: "kube-api-access-h2wcb") pod "d019a2fd-1864-4c5b-8deb-62c898466850" (UID: "d019a2fd-1864-4c5b-8deb-62c898466850"). InnerVolumeSpecName "kube-api-access-h2wcb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.050185 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d019a2fd-1864-4c5b-8deb-62c898466850-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d019a2fd-1864-4c5b-8deb-62c898466850" (UID: "d019a2fd-1864-4c5b-8deb-62c898466850"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.107295 4866 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d019a2fd-1864-4c5b-8deb-62c898466850-utilities\") on node \"crc\" DevicePath \"\""
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.107328 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2wcb\" (UniqueName: \"kubernetes.io/projected/d019a2fd-1864-4c5b-8deb-62c898466850-kube-api-access-h2wcb\") on node \"crc\" DevicePath \"\""
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.107340 4866 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d019a2fd-1864-4c5b-8deb-62c898466850-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.568668 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mjtlq" event={"ID":"814d3162-a076-488a-9162-b5154651254d","Type":"ContainerStarted","Data":"38bb11ce71e6814344dc8284f9363c82c6b749d4a051a97a2c93fd542be5565c"}
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.568710 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mjtlq" event={"ID":"814d3162-a076-488a-9162-b5154651254d","Type":"ContainerStarted","Data":"26f6d75435721d02fc4e9265eee7d058ffa4ac549b0df2b5d885af30a12d883f"}
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.569612 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-mjtlq"
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.572646 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-mjtlq"
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.577170 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-brrt8" event={"ID":"a82c92c0-47dc-4f29-8aa0-304a9f34f728","Type":"ContainerDied","Data":"ed73c08f9dcdc0db47230a679668ce4d8b2fb0923233590cd9edc4002bf4dffa"}
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.577214 4866 scope.go:117] "RemoveContainer" containerID="8f45ac33445ca44bcfb9a4085594cda6d34f2026737786582a13bd893448940a"
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.577277 4866 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-brrt8"
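
Each pod teardown in these entries follows the same two-step shape: operation_generator.go logs UnmountVolume.TearDown succeeded first, and only then does reconciler_common.go report "Volume detached" and drop the volume from actual state. A toy Go model of that ordering, assuming (for illustration only) that tearing down an emptyDir volume amounts to removing its per-pod directory:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// teardownEmptyDir stands in for UnmountVolume.TearDown on an emptyDir
// volume: there is no real unmount, just removal of the per-pod volume
// directory. The layout mirrors /var/lib/kubelet; root is a temp dir here.
func teardownEmptyDir(root, podUID, volume string) error {
	dir := filepath.Join(root, "pods", podUID, "volumes", "kubernetes.io~empty-dir", volume)
	return os.RemoveAll(dir) // nil even if the directory is already gone
}

func main() {
	root, err := os.MkdirTemp("", "kubelet")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(root)

	podUID := "d019a2fd-1864-4c5b-8deb-62c898466850"
	actualState := map[string]bool{"utilities": true, "catalog-content": true}

	for vol := range actualState {
		if err := teardownEmptyDir(root, podUID, vol); err != nil {
			fmt.Println("teardown failed, volume stays attached:", err)
			continue
		}
		delete(actualState, vol) // step 2: only now does it count as detached
		fmt.Printf("Volume detached for volume %q on node %q\n", vol, "crc")
	}
}

The order matters: if teardown fails, the volume stays in actual state and the reconciler retries rather than silently leaking the mount.
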
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.602289 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-mjtlq" podStartSLOduration=2.602270328 podStartE2EDuration="2.602270328s" podCreationTimestamp="2025-12-13 22:22:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:22:21.594913817 +0000 UTC m=+339.636252359" watchObservedRunningTime="2025-12-13 22:22:21.602270328 +0000 UTC m=+339.643608880"
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.602394 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rw67t" event={"ID":"d019a2fd-1864-4c5b-8deb-62c898466850","Type":"ContainerDied","Data":"ae6c071ec73ad9491f98a3d0f12668034fb1297bc2d38971c35b83f7b60d9d46"}
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.602482 4866 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rw67t"
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.627724 4866 scope.go:117] "RemoveContainer" containerID="dd75b0e02026e02d0df6c6baf28aa236eac032b321561b66463d203012f1c775"
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.631606 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-brrt8"]
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.644527 4866 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-brrt8"]
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.662189 4866 scope.go:117] "RemoveContainer" containerID="755edc36d51cf054e5c2a87215e40b6b12ffcf129e33d41d8e49f714abcf2b57"
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.662321 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rw67t"]
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.669908 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2x8bb"]
Dec 13 22:22:21 crc kubenswrapper[4866]: E1213 22:22:21.670639 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a29082d-49a1-4625-9d9b-568ef75773c8" containerName="extract-utilities"
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.670663 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a29082d-49a1-4625-9d9b-568ef75773c8" containerName="extract-utilities"
Dec 13 22:22:21 crc kubenswrapper[4866]: E1213 22:22:21.670680 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="902bee05-89e6-48b6-becf-d715d04dd8cd" containerName="extract-utilities"
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.670690 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="902bee05-89e6-48b6-becf-d715d04dd8cd" containerName="extract-utilities"
Dec 13 22:22:21 crc kubenswrapper[4866]: E1213 22:22:21.670706 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d019a2fd-1864-4c5b-8deb-62c898466850" containerName="extract-content"
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.670714 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="d019a2fd-1864-4c5b-8deb-62c898466850" containerName="extract-content"
Dec 13 22:22:21 crc kubenswrapper[4866]: E1213 22:22:21.670725 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a61d5864-61a4-46a9-a4f5-020d4ed879cd" containerName="registry-server"
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.670733 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="a61d5864-61a4-46a9-a4f5-020d4ed879cd" containerName="registry-server"
Dec 13 22:22:21 crc kubenswrapper[4866]: E1213 22:22:21.670744 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a29082d-49a1-4625-9d9b-568ef75773c8" containerName="registry-server"
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.670754 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a29082d-49a1-4625-9d9b-568ef75773c8" containerName="registry-server"
Dec 13 22:22:21 crc kubenswrapper[4866]: E1213 22:22:21.670766 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a61d5864-61a4-46a9-a4f5-020d4ed879cd" containerName="extract-utilities"
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.670776 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="a61d5864-61a4-46a9-a4f5-020d4ed879cd" containerName="extract-utilities"
Dec 13 22:22:21 crc kubenswrapper[4866]: E1213 22:22:21.670789 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="902bee05-89e6-48b6-becf-d715d04dd8cd" containerName="extract-content"
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.670796 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="902bee05-89e6-48b6-becf-d715d04dd8cd" containerName="extract-content"
Dec 13 22:22:21 crc kubenswrapper[4866]: E1213 22:22:21.670808 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a82c92c0-47dc-4f29-8aa0-304a9f34f728" containerName="marketplace-operator"
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.670816 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="a82c92c0-47dc-4f29-8aa0-304a9f34f728" containerName="marketplace-operator"
Dec 13 22:22:21 crc kubenswrapper[4866]: E1213 22:22:21.670827 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a61d5864-61a4-46a9-a4f5-020d4ed879cd" containerName="extract-content"
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.670835 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="a61d5864-61a4-46a9-a4f5-020d4ed879cd" containerName="extract-content"
Dec 13 22:22:21 crc kubenswrapper[4866]: E1213 22:22:21.670846 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d019a2fd-1864-4c5b-8deb-62c898466850" containerName="registry-server"
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.670854 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="d019a2fd-1864-4c5b-8deb-62c898466850" containerName="registry-server"
Dec 13 22:22:21 crc kubenswrapper[4866]: E1213 22:22:21.670920 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a29082d-49a1-4625-9d9b-568ef75773c8" containerName="extract-content"
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.670930 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a29082d-49a1-4625-9d9b-568ef75773c8" containerName="extract-content"
Dec 13 22:22:21 crc kubenswrapper[4866]: E1213 22:22:21.670943 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d019a2fd-1864-4c5b-8deb-62c898466850" containerName="extract-utilities"
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.670951 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="d019a2fd-1864-4c5b-8deb-62c898466850" containerName="extract-utilities"
Dec 13 22:22:21 crc kubenswrapper[4866]: E1213 22:22:21.670964 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="902bee05-89e6-48b6-becf-d715d04dd8cd" containerName="registry-server"
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.670974 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="902bee05-89e6-48b6-becf-d715d04dd8cd" containerName="registry-server"
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.671110 4866 memory_manager.go:354] "RemoveStaleState removing state" podUID="a82c92c0-47dc-4f29-8aa0-304a9f34f728" containerName="marketplace-operator"
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.671132 4866 memory_manager.go:354] "RemoveStaleState removing state" podUID="902bee05-89e6-48b6-becf-d715d04dd8cd" containerName="registry-server"
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.671142 4866 memory_manager.go:354] "RemoveStaleState removing state" podUID="a61d5864-61a4-46a9-a4f5-020d4ed879cd" containerName="registry-server"
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.671154 4866 memory_manager.go:354] "RemoveStaleState removing state" podUID="d019a2fd-1864-4c5b-8deb-62c898466850" containerName="registry-server"
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.671163 4866 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a29082d-49a1-4625-9d9b-568ef75773c8" containerName="registry-server"
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.675163 4866 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rw67t"]
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.675280 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2x8bb"
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.679269 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2x8bb"]
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.680764 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.692261 4866 scope.go:117] "RemoveContainer" containerID="856bf9dace1a46b85e52c79071387a8e472a15e3e72eaebf8f1b9e7e509990d8"
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.815504 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3d9cb6c-33b5-426f-9302-645ee78042d8-utilities\") pod \"redhat-marketplace-2x8bb\" (UID: \"c3d9cb6c-33b5-426f-9302-645ee78042d8\") " pod="openshift-marketplace/redhat-marketplace-2x8bb"
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.815807 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6hh7\" (UniqueName: \"kubernetes.io/projected/c3d9cb6c-33b5-426f-9302-645ee78042d8-kube-api-access-j6hh7\") pod \"redhat-marketplace-2x8bb\" (UID: \"c3d9cb6c-33b5-426f-9302-645ee78042d8\") " pod="openshift-marketplace/redhat-marketplace-2x8bb"
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.815998 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3d9cb6c-33b5-426f-9302-645ee78042d8-catalog-content\") pod \"redhat-marketplace-2x8bb\" (UID: \"c3d9cb6c-33b5-426f-9302-645ee78042d8\") " pod="openshift-marketplace/redhat-marketplace-2x8bb"
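
The burst of cpu_manager / memory_manager lines around the redhat-marketplace-2x8bb ADD is housekeeping: before admitting the new pod, per-container resource state left behind by the just-deleted pods is pruned. A compact sketch of that bookkeeping (the types and state layout here are invented for illustration):

package main

import "fmt"

// key identifies a container's resource-manager state entry.
type key struct{ podUID, containerName string }

// removeStaleState drops CPU-set assignments whose pod no longer exists,
// mirroring the cpu_manager/state_mem pairs in the log above.
func removeStaleState(assignments map[key]string, activePods map[string]bool) {
	for k := range assignments {
		if activePods[k.podUID] {
			continue
		}
		fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", k.podUID, k.containerName)
		delete(assignments, k)
		fmt.Printf("Deleted CPUSet assignment podUID=%q containerName=%q\n", k.podUID, k.containerName)
	}
}

func main() {
	assignments := map[key]string{
		{"1a29082d-49a1-4625-9d9b-568ef75773c8", "extract-utilities"}: "0-3",
		{"902bee05-89e6-48b6-becf-d715d04dd8cd", "registry-server"}:   "0-3",
	}
	removeStaleState(assignments, map[string]bool{}) // no pods active: both pruned
}
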
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.917248 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3d9cb6c-33b5-426f-9302-645ee78042d8-catalog-content\") pod \"redhat-marketplace-2x8bb\" (UID: \"c3d9cb6c-33b5-426f-9302-645ee78042d8\") " pod="openshift-marketplace/redhat-marketplace-2x8bb"
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.917295 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3d9cb6c-33b5-426f-9302-645ee78042d8-utilities\") pod \"redhat-marketplace-2x8bb\" (UID: \"c3d9cb6c-33b5-426f-9302-645ee78042d8\") " pod="openshift-marketplace/redhat-marketplace-2x8bb"
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.917329 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6hh7\" (UniqueName: \"kubernetes.io/projected/c3d9cb6c-33b5-426f-9302-645ee78042d8-kube-api-access-j6hh7\") pod \"redhat-marketplace-2x8bb\" (UID: \"c3d9cb6c-33b5-426f-9302-645ee78042d8\") " pod="openshift-marketplace/redhat-marketplace-2x8bb"
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.918302 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3d9cb6c-33b5-426f-9302-645ee78042d8-catalog-content\") pod \"redhat-marketplace-2x8bb\" (UID: \"c3d9cb6c-33b5-426f-9302-645ee78042d8\") " pod="openshift-marketplace/redhat-marketplace-2x8bb"
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.918656 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3d9cb6c-33b5-426f-9302-645ee78042d8-utilities\") pod \"redhat-marketplace-2x8bb\" (UID: \"c3d9cb6c-33b5-426f-9302-645ee78042d8\") " pod="openshift-marketplace/redhat-marketplace-2x8bb"
Dec 13 22:22:21 crc kubenswrapper[4866]: I1213 22:22:21.934556 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6hh7\" (UniqueName: \"kubernetes.io/projected/c3d9cb6c-33b5-426f-9302-645ee78042d8-kube-api-access-j6hh7\") pod \"redhat-marketplace-2x8bb\" (UID: \"c3d9cb6c-33b5-426f-9302-645ee78042d8\") " pod="openshift-marketplace/redhat-marketplace-2x8bb"
Dec 13 22:22:22 crc kubenswrapper[4866]: I1213 22:22:22.007309 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2x8bb"
Dec 13 22:22:22 crc kubenswrapper[4866]: I1213 22:22:22.219545 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a29082d-49a1-4625-9d9b-568ef75773c8" path="/var/lib/kubelet/pods/1a29082d-49a1-4625-9d9b-568ef75773c8/volumes"
Dec 13 22:22:22 crc kubenswrapper[4866]: I1213 22:22:22.220446 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="902bee05-89e6-48b6-becf-d715d04dd8cd" path="/var/lib/kubelet/pods/902bee05-89e6-48b6-becf-d715d04dd8cd/volumes"
Dec 13 22:22:22 crc kubenswrapper[4866]: I1213 22:22:22.220980 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a61d5864-61a4-46a9-a4f5-020d4ed879cd" path="/var/lib/kubelet/pods/a61d5864-61a4-46a9-a4f5-020d4ed879cd/volumes"
Dec 13 22:22:22 crc kubenswrapper[4866]: I1213 22:22:22.221980 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a82c92c0-47dc-4f29-8aa0-304a9f34f728" path="/var/lib/kubelet/pods/a82c92c0-47dc-4f29-8aa0-304a9f34f728/volumes"
Dec 13 22:22:22 crc kubenswrapper[4866]: I1213 22:22:22.222476 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d019a2fd-1864-4c5b-8deb-62c898466850" path="/var/lib/kubelet/pods/d019a2fd-1864-4c5b-8deb-62c898466850/volumes"
Dec 13 22:22:22 crc kubenswrapper[4866]: I1213 22:22:22.418179 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2x8bb"]
Dec 13 22:22:22 crc kubenswrapper[4866]: W1213 22:22:22.443737 4866 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3d9cb6c_33b5_426f_9302_645ee78042d8.slice/crio-b651d1b97ddcdc79088523b9f8f03cfe1f7f2f4c0c985c35c1837909687581c7 WatchSource:0}: Error finding container b651d1b97ddcdc79088523b9f8f03cfe1f7f2f4c0c985c35c1837909687581c7: Status 404 returned error can't find the container with id b651d1b97ddcdc79088523b9f8f03cfe1f7f2f4c0c985c35c1837909687581c7
Dec 13 22:22:22 crc kubenswrapper[4866]: I1213 22:22:22.625602 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2x8bb" event={"ID":"c3d9cb6c-33b5-426f-9302-645ee78042d8","Type":"ContainerDied","Data":"dff43aa70da50a296bf9c7bc4e48c17ebdd07ae1d1d17c9231c13a9a7dbac0a3"}
Dec 13 22:22:22 crc kubenswrapper[4866]: I1213 22:22:22.627001 4866 generic.go:334] "Generic (PLEG): container finished" podID="c3d9cb6c-33b5-426f-9302-645ee78042d8" containerID="dff43aa70da50a296bf9c7bc4e48c17ebdd07ae1d1d17c9231c13a9a7dbac0a3" exitCode=0
Dec 13 22:22:22 crc kubenswrapper[4866]: I1213 22:22:22.627203 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2x8bb" event={"ID":"c3d9cb6c-33b5-426f-9302-645ee78042d8","Type":"ContainerStarted","Data":"b651d1b97ddcdc79088523b9f8f03cfe1f7f2f4c0c985c35c1837909687581c7"}
Dec 13 22:22:22 crc kubenswrapper[4866]: I1213 22:22:22.670165 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-khmc8"]
Dec 13 22:22:22 crc kubenswrapper[4866]: I1213 22:22:22.671278 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-khmc8"
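
The five "Cleaned up orphaned pod volumes dir" entries above are the final step of pod removal: once every volume is unmounted, the pod's volumes directory under /var/lib/kubelet/pods is swept. A rough sketch of such a sweep, heavily simplified (the real kubelet performs additional safety checks before deleting anything):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cleanupOrphanedPodDirs removes the volumes directory of every pod dir
// whose UID the kubelet no longer knows about.
func cleanupOrphanedPodDirs(root string, known map[string]bool) error {
	entries, err := os.ReadDir(root)
	if err != nil {
		return err
	}
	for _, e := range entries {
		if !e.IsDir() || known[e.Name()] {
			continue
		}
		volumes := filepath.Join(root, e.Name(), "volumes")
		if err := os.RemoveAll(volumes); err != nil {
			return err
		}
		fmt.Printf("Cleaned up orphaned pod volumes dir podUID=%q path=%q\n", e.Name(), volumes)
	}
	return nil
}

func main() {
	root, err := os.MkdirTemp("", "pods") // stand-in for /var/lib/kubelet/pods
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(root)
	_ = os.MkdirAll(filepath.Join(root, "1a29082d-49a1-4625-9d9b-568ef75773c8", "volumes"), 0o755)
	_ = cleanupOrphanedPodDirs(root, map[string]bool{}) // no known pods: the dir is swept
}
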
Dec 13 22:22:22 crc kubenswrapper[4866]: I1213 22:22:22.673165 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Dec 13 22:22:22 crc kubenswrapper[4866]: I1213 22:22:22.677799 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-khmc8"]
Dec 13 22:22:22 crc kubenswrapper[4866]: I1213 22:22:22.736408 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70a5d2fc-f001-494b-aa01-cb967df89845-catalog-content\") pod \"certified-operators-khmc8\" (UID: \"70a5d2fc-f001-494b-aa01-cb967df89845\") " pod="openshift-marketplace/certified-operators-khmc8"
Dec 13 22:22:22 crc kubenswrapper[4866]: I1213 22:22:22.736462 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70a5d2fc-f001-494b-aa01-cb967df89845-utilities\") pod \"certified-operators-khmc8\" (UID: \"70a5d2fc-f001-494b-aa01-cb967df89845\") " pod="openshift-marketplace/certified-operators-khmc8"
Dec 13 22:22:22 crc kubenswrapper[4866]: I1213 22:22:22.736738 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx5w6\" (UniqueName: \"kubernetes.io/projected/70a5d2fc-f001-494b-aa01-cb967df89845-kube-api-access-nx5w6\") pod \"certified-operators-khmc8\" (UID: \"70a5d2fc-f001-494b-aa01-cb967df89845\") " pod="openshift-marketplace/certified-operators-khmc8"
Dec 13 22:22:22 crc kubenswrapper[4866]: I1213 22:22:22.838138 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx5w6\" (UniqueName: \"kubernetes.io/projected/70a5d2fc-f001-494b-aa01-cb967df89845-kube-api-access-nx5w6\") pod \"certified-operators-khmc8\" (UID: \"70a5d2fc-f001-494b-aa01-cb967df89845\") " pod="openshift-marketplace/certified-operators-khmc8"
Dec 13 22:22:22 crc kubenswrapper[4866]: I1213 22:22:22.838441 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70a5d2fc-f001-494b-aa01-cb967df89845-catalog-content\") pod \"certified-operators-khmc8\" (UID: \"70a5d2fc-f001-494b-aa01-cb967df89845\") " pod="openshift-marketplace/certified-operators-khmc8"
Dec 13 22:22:22 crc kubenswrapper[4866]: I1213 22:22:22.838549 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70a5d2fc-f001-494b-aa01-cb967df89845-utilities\") pod \"certified-operators-khmc8\" (UID: \"70a5d2fc-f001-494b-aa01-cb967df89845\") " pod="openshift-marketplace/certified-operators-khmc8"
Dec 13 22:22:22 crc kubenswrapper[4866]: I1213 22:22:22.838915 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70a5d2fc-f001-494b-aa01-cb967df89845-catalog-content\") pod \"certified-operators-khmc8\" (UID: \"70a5d2fc-f001-494b-aa01-cb967df89845\") " pod="openshift-marketplace/certified-operators-khmc8"
Dec 13 22:22:22 crc kubenswrapper[4866]: I1213 22:22:22.839029 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70a5d2fc-f001-494b-aa01-cb967df89845-utilities\") pod \"certified-operators-khmc8\" (UID: \"70a5d2fc-f001-494b-aa01-cb967df89845\") " pod="openshift-marketplace/certified-operators-khmc8"
Dec 13 22:22:22 crc kubenswrapper[4866]: I1213 22:22:22.859458 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx5w6\" (UniqueName: \"kubernetes.io/projected/70a5d2fc-f001-494b-aa01-cb967df89845-kube-api-access-nx5w6\") pod \"certified-operators-khmc8\" (UID: \"70a5d2fc-f001-494b-aa01-cb967df89845\") " pod="openshift-marketplace/certified-operators-khmc8"
Dec 13 22:22:22 crc kubenswrapper[4866]: I1213 22:22:22.997884 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-khmc8"
Dec 13 22:22:23 crc kubenswrapper[4866]: I1213 22:22:23.428705 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-khmc8"]
Dec 13 22:22:23 crc kubenswrapper[4866]: W1213 22:22:23.437651 4866 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70a5d2fc_f001_494b_aa01_cb967df89845.slice/crio-4cf8bf22f093467703623a83dd3900fa09d1607bc4650d4e750cbaadb9fbda1d WatchSource:0}: Error finding container 4cf8bf22f093467703623a83dd3900fa09d1607bc4650d4e750cbaadb9fbda1d: Status 404 returned error can't find the container with id 4cf8bf22f093467703623a83dd3900fa09d1607bc4650d4e750cbaadb9fbda1d
Dec 13 22:22:23 crc kubenswrapper[4866]: I1213 22:22:23.633179 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2x8bb" event={"ID":"c3d9cb6c-33b5-426f-9302-645ee78042d8","Type":"ContainerStarted","Data":"372827ca6244e12ee5a6c524cbd8c73f508a35aa8c983e0e6478dac18706b3b0"}
Dec 13 22:22:23 crc kubenswrapper[4866]: I1213 22:22:23.634592 4866 generic.go:334] "Generic (PLEG): container finished" podID="70a5d2fc-f001-494b-aa01-cb967df89845" containerID="02b1ec3db0bb55208a5c88c04552b48d3a4c16508a9a35ffbd7603b797eabcfc" exitCode=0
Dec 13 22:22:23 crc kubenswrapper[4866]: I1213 22:22:23.634649 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-khmc8" event={"ID":"70a5d2fc-f001-494b-aa01-cb967df89845","Type":"ContainerDied","Data":"02b1ec3db0bb55208a5c88c04552b48d3a4c16508a9a35ffbd7603b797eabcfc"}
Dec 13 22:22:23 crc kubenswrapper[4866]: I1213 22:22:23.634675 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-khmc8" event={"ID":"70a5d2fc-f001-494b-aa01-cb967df89845","Type":"ContainerStarted","Data":"4cf8bf22f093467703623a83dd3900fa09d1607bc4650d4e750cbaadb9fbda1d"}
Dec 13 22:22:24 crc kubenswrapper[4866]: I1213 22:22:24.034420 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xtkpz"]
Dec 13 22:22:24 crc kubenswrapper[4866]: I1213 22:22:24.035637 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xtkpz"
Dec 13 22:22:24 crc kubenswrapper[4866]: I1213 22:22:24.039457 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Dec 13 22:22:24 crc kubenswrapper[4866]: I1213 22:22:24.041829 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xtkpz"]
Dec 13 22:22:24 crc kubenswrapper[4866]: I1213 22:22:24.158263 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/106fc235-f259-42cd-9457-61b834ebe9e4-utilities\") pod \"redhat-operators-xtkpz\" (UID: \"106fc235-f259-42cd-9457-61b834ebe9e4\") " pod="openshift-marketplace/redhat-operators-xtkpz"
Dec 13 22:22:24 crc kubenswrapper[4866]: I1213 22:22:24.158297 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/106fc235-f259-42cd-9457-61b834ebe9e4-catalog-content\") pod \"redhat-operators-xtkpz\" (UID: \"106fc235-f259-42cd-9457-61b834ebe9e4\") " pod="openshift-marketplace/redhat-operators-xtkpz"
Dec 13 22:22:24 crc kubenswrapper[4866]: I1213 22:22:24.158356 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s58v5\" (UniqueName: \"kubernetes.io/projected/106fc235-f259-42cd-9457-61b834ebe9e4-kube-api-access-s58v5\") pod \"redhat-operators-xtkpz\" (UID: \"106fc235-f259-42cd-9457-61b834ebe9e4\") " pod="openshift-marketplace/redhat-operators-xtkpz"
Dec 13 22:22:24 crc kubenswrapper[4866]: I1213 22:22:24.259473 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/106fc235-f259-42cd-9457-61b834ebe9e4-catalog-content\") pod \"redhat-operators-xtkpz\" (UID: \"106fc235-f259-42cd-9457-61b834ebe9e4\") " pod="openshift-marketplace/redhat-operators-xtkpz"
Dec 13 22:22:24 crc kubenswrapper[4866]: I1213 22:22:24.259510 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/106fc235-f259-42cd-9457-61b834ebe9e4-utilities\") pod \"redhat-operators-xtkpz\" (UID: \"106fc235-f259-42cd-9457-61b834ebe9e4\") " pod="openshift-marketplace/redhat-operators-xtkpz"
Dec 13 22:22:24 crc kubenswrapper[4866]: I1213 22:22:24.259569 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s58v5\" (UniqueName: \"kubernetes.io/projected/106fc235-f259-42cd-9457-61b834ebe9e4-kube-api-access-s58v5\") pod \"redhat-operators-xtkpz\" (UID: \"106fc235-f259-42cd-9457-61b834ebe9e4\") " pod="openshift-marketplace/redhat-operators-xtkpz"
Dec 13 22:22:24 crc kubenswrapper[4866]: I1213 22:22:24.261320 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/106fc235-f259-42cd-9457-61b834ebe9e4-catalog-content\") pod \"redhat-operators-xtkpz\" (UID: \"106fc235-f259-42cd-9457-61b834ebe9e4\") " pod="openshift-marketplace/redhat-operators-xtkpz"
Dec 13 22:22:24 crc kubenswrapper[4866]: I1213 22:22:24.261725 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/106fc235-f259-42cd-9457-61b834ebe9e4-utilities\") pod \"redhat-operators-xtkpz\" (UID: \"106fc235-f259-42cd-9457-61b834ebe9e4\") " pod="openshift-marketplace/redhat-operators-xtkpz"
Dec 13 22:22:24 crc kubenswrapper[4866]: I1213 22:22:24.279214 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s58v5\" (UniqueName: \"kubernetes.io/projected/106fc235-f259-42cd-9457-61b834ebe9e4-kube-api-access-s58v5\") pod \"redhat-operators-xtkpz\" (UID: \"106fc235-f259-42cd-9457-61b834ebe9e4\") " pod="openshift-marketplace/redhat-operators-xtkpz"
Dec 13 22:22:24 crc kubenswrapper[4866]: I1213 22:22:24.358175 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xtkpz"
Dec 13 22:22:24 crc kubenswrapper[4866]: I1213 22:22:24.640664 4866 generic.go:334] "Generic (PLEG): container finished" podID="c3d9cb6c-33b5-426f-9302-645ee78042d8" containerID="372827ca6244e12ee5a6c524cbd8c73f508a35aa8c983e0e6478dac18706b3b0" exitCode=0
Dec 13 22:22:24 crc kubenswrapper[4866]: I1213 22:22:24.640777 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2x8bb" event={"ID":"c3d9cb6c-33b5-426f-9302-645ee78042d8","Type":"ContainerDied","Data":"372827ca6244e12ee5a6c524cbd8c73f508a35aa8c983e0e6478dac18706b3b0"}
Dec 13 22:22:24 crc kubenswrapper[4866]: I1213 22:22:24.642713 4866 generic.go:334] "Generic (PLEG): container finished" podID="70a5d2fc-f001-494b-aa01-cb967df89845" containerID="40caaef7a0a91f695ff5001a41e83748627f2e67efab40b8b01d5de2faa8d1ae" exitCode=0
Dec 13 22:22:24 crc kubenswrapper[4866]: I1213 22:22:24.642767 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-khmc8" event={"ID":"70a5d2fc-f001-494b-aa01-cb967df89845","Type":"ContainerDied","Data":"40caaef7a0a91f695ff5001a41e83748627f2e67efab40b8b01d5de2faa8d1ae"}
Dec 13 22:22:24 crc kubenswrapper[4866]: I1213 22:22:24.748947 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xtkpz"]
Dec 13 22:22:24 crc kubenswrapper[4866]: W1213 22:22:24.754550 4866 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod106fc235_f259_42cd_9457_61b834ebe9e4.slice/crio-64b0657f01a2b3d4287b99be581f4586731d2a5762f3837c265eb73552ad981d WatchSource:0}: Error finding container 64b0657f01a2b3d4287b99be581f4586731d2a5762f3837c265eb73552ad981d: Status 404 returned error can't find the container with id 64b0657f01a2b3d4287b99be581f4586731d2a5762f3837c265eb73552ad981d
Dec 13 22:22:25 crc kubenswrapper[4866]: I1213 22:22:25.029848 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4lk5n"]
Dec 13 22:22:25 crc kubenswrapper[4866]: I1213 22:22:25.030732 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4lk5n"
Dec 13 22:22:25 crc kubenswrapper[4866]: I1213 22:22:25.038772 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Dec 13 22:22:25 crc kubenswrapper[4866]: I1213 22:22:25.053040 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4lk5n"]
Dec 13 22:22:25 crc kubenswrapper[4866]: I1213 22:22:25.173733 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d7fcbe-339e-4276-852c-0f959263f9d4-catalog-content\") pod \"community-operators-4lk5n\" (UID: \"e8d7fcbe-339e-4276-852c-0f959263f9d4\") " pod="openshift-marketplace/community-operators-4lk5n"
Dec 13 22:22:25 crc kubenswrapper[4866]: I1213 22:22:25.173811 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d7fcbe-339e-4276-852c-0f959263f9d4-utilities\") pod \"community-operators-4lk5n\" (UID: \"e8d7fcbe-339e-4276-852c-0f959263f9d4\") " pod="openshift-marketplace/community-operators-4lk5n"
Dec 13 22:22:25 crc kubenswrapper[4866]: I1213 22:22:25.173850 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z559t\" (UniqueName: \"kubernetes.io/projected/e8d7fcbe-339e-4276-852c-0f959263f9d4-kube-api-access-z559t\") pod \"community-operators-4lk5n\" (UID: \"e8d7fcbe-339e-4276-852c-0f959263f9d4\") " pod="openshift-marketplace/community-operators-4lk5n"
Dec 13 22:22:25 crc kubenswrapper[4866]: I1213 22:22:25.274956 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z559t\" (UniqueName: \"kubernetes.io/projected/e8d7fcbe-339e-4276-852c-0f959263f9d4-kube-api-access-z559t\") pod \"community-operators-4lk5n\" (UID: \"e8d7fcbe-339e-4276-852c-0f959263f9d4\") " pod="openshift-marketplace/community-operators-4lk5n"
Dec 13 22:22:25 crc kubenswrapper[4866]: I1213 22:22:25.275252 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d7fcbe-339e-4276-852c-0f959263f9d4-catalog-content\") pod \"community-operators-4lk5n\" (UID: \"e8d7fcbe-339e-4276-852c-0f959263f9d4\") " pod="openshift-marketplace/community-operators-4lk5n"
Dec 13 22:22:25 crc kubenswrapper[4866]: I1213 22:22:25.275473 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d7fcbe-339e-4276-852c-0f959263f9d4-utilities\") pod \"community-operators-4lk5n\" (UID: \"e8d7fcbe-339e-4276-852c-0f959263f9d4\") " pod="openshift-marketplace/community-operators-4lk5n"
Dec 13 22:22:25 crc kubenswrapper[4866]: I1213 22:22:25.275753 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d7fcbe-339e-4276-852c-0f959263f9d4-catalog-content\") pod \"community-operators-4lk5n\" (UID: \"e8d7fcbe-339e-4276-852c-0f959263f9d4\") " pod="openshift-marketplace/community-operators-4lk5n"
Dec 13 22:22:25 crc kubenswrapper[4866]: I1213 22:22:25.275874 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d7fcbe-339e-4276-852c-0f959263f9d4-utilities\") pod \"community-operators-4lk5n\" (UID: \"e8d7fcbe-339e-4276-852c-0f959263f9d4\") " pod="openshift-marketplace/community-operators-4lk5n"
Dec 13 22:22:25 crc kubenswrapper[4866]: I1213 22:22:25.296736 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z559t\" (UniqueName: \"kubernetes.io/projected/e8d7fcbe-339e-4276-852c-0f959263f9d4-kube-api-access-z559t\") pod \"community-operators-4lk5n\" (UID: \"e8d7fcbe-339e-4276-852c-0f959263f9d4\") " pod="openshift-marketplace/community-operators-4lk5n"
Dec 13 22:22:25 crc kubenswrapper[4866]: I1213 22:22:25.352488 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4lk5n"
Dec 13 22:22:26 crc kubenswrapper[4866]: I1213 22:22:25.650083 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2x8bb" event={"ID":"c3d9cb6c-33b5-426f-9302-645ee78042d8","Type":"ContainerStarted","Data":"54c571904f6b290f25557e34799e60cf1bffa795bc321bd1d837057580c009f4"}
Dec 13 22:22:26 crc kubenswrapper[4866]: I1213 22:22:25.653465 4866 generic.go:334] "Generic (PLEG): container finished" podID="106fc235-f259-42cd-9457-61b834ebe9e4" containerID="cb8e3fc9c0e10ba305f42b720eeb914ec0eedfa792adbba633f4c8ebe642e898" exitCode=0
Dec 13 22:22:26 crc kubenswrapper[4866]: I1213 22:22:25.653570 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtkpz" event={"ID":"106fc235-f259-42cd-9457-61b834ebe9e4","Type":"ContainerDied","Data":"cb8e3fc9c0e10ba305f42b720eeb914ec0eedfa792adbba633f4c8ebe642e898"}
Dec 13 22:22:26 crc kubenswrapper[4866]: I1213 22:22:25.653596 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtkpz" event={"ID":"106fc235-f259-42cd-9457-61b834ebe9e4","Type":"ContainerStarted","Data":"64b0657f01a2b3d4287b99be581f4586731d2a5762f3837c265eb73552ad981d"}
Dec 13 22:22:26 crc kubenswrapper[4866]: I1213 22:22:25.658503 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-khmc8" event={"ID":"70a5d2fc-f001-494b-aa01-cb967df89845","Type":"ContainerStarted","Data":"1b9065f47f005f90e778cf9229d2aa6f2ad2e77f1fd8ca23410cf7222d671e0c"}
Dec 13 22:22:26 crc kubenswrapper[4866]: I1213 22:22:25.669699 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2x8bb" podStartSLOduration=2.198570443 podStartE2EDuration="4.669682059s" podCreationTimestamp="2025-12-13 22:22:21 +0000 UTC" firstStartedPulling="2025-12-13 22:22:22.638906452 +0000 UTC m=+340.680245004" lastFinishedPulling="2025-12-13 22:22:25.110018078 +0000 UTC m=+343.151356620" observedRunningTime="2025-12-13 22:22:25.666116586 +0000 UTC m=+343.707455138" watchObservedRunningTime="2025-12-13 22:22:25.669682059 +0000 UTC m=+343.711020611"
Dec 13 22:22:26 crc kubenswrapper[4866]: I1213 22:22:25.685695 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-khmc8" podStartSLOduration=2.108572889 podStartE2EDuration="3.685674574s" podCreationTimestamp="2025-12-13 22:22:22 +0000 UTC" firstStartedPulling="2025-12-13 22:22:23.635718122 +0000 UTC m=+341.677056674" lastFinishedPulling="2025-12-13 22:22:25.212819807 +0000 UTC m=+343.254158359" observedRunningTime="2025-12-13 22:22:25.684682728 +0000 UTC m=+343.726021280" watchObservedRunningTime="2025-12-13 22:22:25.685674574 +0000 UTC m=+343.727013126"
Dec 13 22:22:26 crc kubenswrapper[4866]: I1213 22:22:26.600988 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4lk5n"]
22:22:26.600988 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4lk5n"] Dec 13 22:22:26 crc kubenswrapper[4866]: W1213 22:22:26.610626 4866 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8d7fcbe_339e_4276_852c_0f959263f9d4.slice/crio-5629b616a857591730936b1077ed403b332599987c77e1a8c552d04d8e0ccda9 WatchSource:0}: Error finding container 5629b616a857591730936b1077ed403b332599987c77e1a8c552d04d8e0ccda9: Status 404 returned error can't find the container with id 5629b616a857591730936b1077ed403b332599987c77e1a8c552d04d8e0ccda9 Dec 13 22:22:26 crc kubenswrapper[4866]: I1213 22:22:26.667033 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4lk5n" event={"ID":"e8d7fcbe-339e-4276-852c-0f959263f9d4","Type":"ContainerStarted","Data":"5629b616a857591730936b1077ed403b332599987c77e1a8c552d04d8e0ccda9"} Dec 13 22:22:26 crc kubenswrapper[4866]: I1213 22:22:26.669002 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtkpz" event={"ID":"106fc235-f259-42cd-9457-61b834ebe9e4","Type":"ContainerStarted","Data":"74e9ff233f6a04c6ae2c502eb93011be8e896c86eaf966752109b3a7c8b4cf00"} Dec 13 22:22:27 crc kubenswrapper[4866]: I1213 22:22:27.675761 4866 generic.go:334] "Generic (PLEG): container finished" podID="106fc235-f259-42cd-9457-61b834ebe9e4" containerID="74e9ff233f6a04c6ae2c502eb93011be8e896c86eaf966752109b3a7c8b4cf00" exitCode=0 Dec 13 22:22:27 crc kubenswrapper[4866]: I1213 22:22:27.675804 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtkpz" event={"ID":"106fc235-f259-42cd-9457-61b834ebe9e4","Type":"ContainerDied","Data":"74e9ff233f6a04c6ae2c502eb93011be8e896c86eaf966752109b3a7c8b4cf00"} Dec 13 22:22:27 crc kubenswrapper[4866]: I1213 22:22:27.677598 4866 generic.go:334] "Generic (PLEG): container finished" podID="e8d7fcbe-339e-4276-852c-0f959263f9d4" containerID="84928fc69a4b64b7b2d8d782e69a6927d3f5ace30bfb9c32ac89ecfbec28d499" exitCode=0 Dec 13 22:22:27 crc kubenswrapper[4866]: I1213 22:22:27.677626 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4lk5n" event={"ID":"e8d7fcbe-339e-4276-852c-0f959263f9d4","Type":"ContainerDied","Data":"84928fc69a4b64b7b2d8d782e69a6927d3f5ace30bfb9c32ac89ecfbec28d499"} Dec 13 22:22:28 crc kubenswrapper[4866]: I1213 22:22:28.684628 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtkpz" event={"ID":"106fc235-f259-42cd-9457-61b834ebe9e4","Type":"ContainerStarted","Data":"b12f91df5427d7cc8cd903c886474bfbae0301245ebe11cfb9b8bc3aa179d7bb"} Dec 13 22:22:28 crc kubenswrapper[4866]: I1213 22:22:28.687347 4866 generic.go:334] "Generic (PLEG): container finished" podID="e8d7fcbe-339e-4276-852c-0f959263f9d4" containerID="0796658ce6b07f2dbdd05261dcf84bed9d32058ddb817ca71c82336a39575e0a" exitCode=0 Dec 13 22:22:28 crc kubenswrapper[4866]: I1213 22:22:28.687380 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4lk5n" event={"ID":"e8d7fcbe-339e-4276-852c-0f959263f9d4","Type":"ContainerDied","Data":"0796658ce6b07f2dbdd05261dcf84bed9d32058ddb817ca71c82336a39575e0a"} Dec 13 22:22:28 crc kubenswrapper[4866]: I1213 22:22:28.705186 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xtkpz" 
podStartSLOduration=2.109545428 podStartE2EDuration="4.705172898s" podCreationTimestamp="2025-12-13 22:22:24 +0000 UTC" firstStartedPulling="2025-12-13 22:22:25.655348296 +0000 UTC m=+343.696686848" lastFinishedPulling="2025-12-13 22:22:28.250975766 +0000 UTC m=+346.292314318" observedRunningTime="2025-12-13 22:22:28.70409391 +0000 UTC m=+346.745432462" watchObservedRunningTime="2025-12-13 22:22:28.705172898 +0000 UTC m=+346.746511450" Dec 13 22:22:31 crc kubenswrapper[4866]: I1213 22:22:31.705330 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4lk5n" event={"ID":"e8d7fcbe-339e-4276-852c-0f959263f9d4","Type":"ContainerStarted","Data":"791a790e9dc41bcf712ec4cfbff9e527b8a7897fcd3071b501ee7ddfc61593d6"} Dec 13 22:22:32 crc kubenswrapper[4866]: I1213 22:22:32.008337 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2x8bb" Dec 13 22:22:32 crc kubenswrapper[4866]: I1213 22:22:32.008396 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2x8bb" Dec 13 22:22:32 crc kubenswrapper[4866]: I1213 22:22:32.054626 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2x8bb" Dec 13 22:22:32 crc kubenswrapper[4866]: I1213 22:22:32.071014 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4lk5n" podStartSLOduration=3.667624453 podStartE2EDuration="7.070999094s" podCreationTimestamp="2025-12-13 22:22:25 +0000 UTC" firstStartedPulling="2025-12-13 22:22:27.684272473 +0000 UTC m=+345.725611045" lastFinishedPulling="2025-12-13 22:22:31.087647134 +0000 UTC m=+349.128985686" observedRunningTime="2025-12-13 22:22:31.724454286 +0000 UTC m=+349.765792838" watchObservedRunningTime="2025-12-13 22:22:32.070999094 +0000 UTC m=+350.112337646" Dec 13 22:22:32 crc kubenswrapper[4866]: I1213 22:22:32.747677 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2x8bb" Dec 13 22:22:32 crc kubenswrapper[4866]: I1213 22:22:32.998973 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-khmc8" Dec 13 22:22:32 crc kubenswrapper[4866]: I1213 22:22:32.999034 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-khmc8" Dec 13 22:22:33 crc kubenswrapper[4866]: I1213 22:22:33.036300 4866 patch_prober.go:28] interesting pod/machine-config-daemon-2855n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 22:22:33 crc kubenswrapper[4866]: I1213 22:22:33.036369 4866 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2855n" podUID="9749ec6a-aa76-4ae0-a9d0-453edbf21bca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 22:22:33 crc kubenswrapper[4866]: I1213 22:22:33.041352 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-khmc8" Dec 13 22:22:33 crc kubenswrapper[4866]: I1213 22:22:33.750160 4866 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-khmc8" Dec 13 22:22:34 crc kubenswrapper[4866]: I1213 22:22:34.359197 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xtkpz" Dec 13 22:22:34 crc kubenswrapper[4866]: I1213 22:22:34.359234 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xtkpz" Dec 13 22:22:34 crc kubenswrapper[4866]: I1213 22:22:34.399195 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xtkpz" Dec 13 22:22:34 crc kubenswrapper[4866]: I1213 22:22:34.761987 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xtkpz" Dec 13 22:22:35 crc kubenswrapper[4866]: I1213 22:22:35.352910 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4lk5n" Dec 13 22:22:35 crc kubenswrapper[4866]: I1213 22:22:35.353636 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4lk5n" Dec 13 22:22:35 crc kubenswrapper[4866]: I1213 22:22:35.394392 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4lk5n" Dec 13 22:22:36 crc kubenswrapper[4866]: I1213 22:22:36.773646 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4lk5n" Dec 13 22:23:03 crc kubenswrapper[4866]: I1213 22:23:03.036185 4866 patch_prober.go:28] interesting pod/machine-config-daemon-2855n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 22:23:03 crc kubenswrapper[4866]: I1213 22:23:03.036815 4866 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2855n" podUID="9749ec6a-aa76-4ae0-a9d0-453edbf21bca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 22:23:33 crc kubenswrapper[4866]: I1213 22:23:33.036223 4866 patch_prober.go:28] interesting pod/machine-config-daemon-2855n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 22:23:33 crc kubenswrapper[4866]: I1213 22:23:33.036649 4866 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2855n" podUID="9749ec6a-aa76-4ae0-a9d0-453edbf21bca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 22:23:33 crc kubenswrapper[4866]: I1213 22:23:33.036701 4866 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2855n" Dec 13 22:23:33 crc kubenswrapper[4866]: I1213 22:23:33.037346 4866 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"ccf6d32a3a2c377b5e2390750299497b5183e20a129ad47bea0ebd1b2415fb34"} pod="openshift-machine-config-operator/machine-config-daemon-2855n" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 13 22:23:33 crc kubenswrapper[4866]: I1213 22:23:33.037403 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2855n" podUID="9749ec6a-aa76-4ae0-a9d0-453edbf21bca" containerName="machine-config-daemon" containerID="cri-o://ccf6d32a3a2c377b5e2390750299497b5183e20a129ad47bea0ebd1b2415fb34" gracePeriod=600 Dec 13 22:23:34 crc kubenswrapper[4866]: I1213 22:23:34.062191 4866 generic.go:334] "Generic (PLEG): container finished" podID="9749ec6a-aa76-4ae0-a9d0-453edbf21bca" containerID="ccf6d32a3a2c377b5e2390750299497b5183e20a129ad47bea0ebd1b2415fb34" exitCode=0 Dec 13 22:23:34 crc kubenswrapper[4866]: I1213 22:23:34.062223 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2855n" event={"ID":"9749ec6a-aa76-4ae0-a9d0-453edbf21bca","Type":"ContainerDied","Data":"ccf6d32a3a2c377b5e2390750299497b5183e20a129ad47bea0ebd1b2415fb34"} Dec 13 22:23:34 crc kubenswrapper[4866]: I1213 22:23:34.062539 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2855n" event={"ID":"9749ec6a-aa76-4ae0-a9d0-453edbf21bca","Type":"ContainerStarted","Data":"ad1a6b23e698b82e4bfd39f1d3e7801aec506999eb17e78abc5dcf82b4f13e6f"} Dec 13 22:23:34 crc kubenswrapper[4866]: I1213 22:23:34.062580 4866 scope.go:117] "RemoveContainer" containerID="26c741b622f1c2bb9339ddf226c75ab27cc68c4989f5c53436b7bf6e42b82176" Dec 13 22:23:42 crc kubenswrapper[4866]: I1213 22:23:42.584989 4866 scope.go:117] "RemoveContainer" containerID="e38c008ded3b789f11f0a7bdf80d7ea252361bbd658f54813f58b3cee7183515" Dec 13 22:25:33 crc kubenswrapper[4866]: I1213 22:25:33.036093 4866 patch_prober.go:28] interesting pod/machine-config-daemon-2855n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 22:25:33 crc kubenswrapper[4866]: I1213 22:25:33.036638 4866 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2855n" podUID="9749ec6a-aa76-4ae0-a9d0-453edbf21bca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 22:26:03 crc kubenswrapper[4866]: I1213 22:26:03.035955 4866 patch_prober.go:28] interesting pod/machine-config-daemon-2855n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 22:26:03 crc kubenswrapper[4866]: I1213 22:26:03.036721 4866 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2855n" podUID="9749ec6a-aa76-4ae0-a9d0-453edbf21bca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 22:26:33 crc kubenswrapper[4866]: I1213 22:26:33.036403 4866 patch_prober.go:28] interesting 
pod/machine-config-daemon-2855n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 22:26:33 crc kubenswrapper[4866]: I1213 22:26:33.036975 4866 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2855n" podUID="9749ec6a-aa76-4ae0-a9d0-453edbf21bca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 22:26:33 crc kubenswrapper[4866]: I1213 22:26:33.037017 4866 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2855n" Dec 13 22:26:33 crc kubenswrapper[4866]: I1213 22:26:33.037673 4866 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ad1a6b23e698b82e4bfd39f1d3e7801aec506999eb17e78abc5dcf82b4f13e6f"} pod="openshift-machine-config-operator/machine-config-daemon-2855n" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 13 22:26:33 crc kubenswrapper[4866]: I1213 22:26:33.037737 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2855n" podUID="9749ec6a-aa76-4ae0-a9d0-453edbf21bca" containerName="machine-config-daemon" containerID="cri-o://ad1a6b23e698b82e4bfd39f1d3e7801aec506999eb17e78abc5dcf82b4f13e6f" gracePeriod=600 Dec 13 22:26:33 crc kubenswrapper[4866]: I1213 22:26:33.578635 4866 generic.go:334] "Generic (PLEG): container finished" podID="9749ec6a-aa76-4ae0-a9d0-453edbf21bca" containerID="ad1a6b23e698b82e4bfd39f1d3e7801aec506999eb17e78abc5dcf82b4f13e6f" exitCode=0 Dec 13 22:26:33 crc kubenswrapper[4866]: I1213 22:26:33.578707 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2855n" event={"ID":"9749ec6a-aa76-4ae0-a9d0-453edbf21bca","Type":"ContainerDied","Data":"ad1a6b23e698b82e4bfd39f1d3e7801aec506999eb17e78abc5dcf82b4f13e6f"} Dec 13 22:26:33 crc kubenswrapper[4866]: I1213 22:26:33.579162 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2855n" event={"ID":"9749ec6a-aa76-4ae0-a9d0-453edbf21bca","Type":"ContainerStarted","Data":"ae5acdfbac1d0aef7817a1dd14d929d1361d26782bef23ac5515732bb6d98ab6"} Dec 13 22:26:33 crc kubenswrapper[4866]: I1213 22:26:33.579252 4866 scope.go:117] "RemoveContainer" containerID="ccf6d32a3a2c377b5e2390750299497b5183e20a129ad47bea0ebd1b2415fb34" Dec 13 22:27:39 crc kubenswrapper[4866]: I1213 22:27:39.134308 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-dk4x7"] Dec 13 22:27:39 crc kubenswrapper[4866]: I1213 22:27:39.135822 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-dk4x7" Dec 13 22:27:39 crc kubenswrapper[4866]: I1213 22:27:39.138461 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 13 22:27:39 crc kubenswrapper[4866]: I1213 22:27:39.138716 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 13 22:27:39 crc kubenswrapper[4866]: I1213 22:27:39.138898 4866 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-4lmcm" Dec 13 22:27:39 crc kubenswrapper[4866]: I1213 22:27:39.138913 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-mv76r"] Dec 13 22:27:39 crc kubenswrapper[4866]: I1213 22:27:39.139566 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-mv76r" Dec 13 22:27:39 crc kubenswrapper[4866]: I1213 22:27:39.144034 4866 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-f4t96" Dec 13 22:27:39 crc kubenswrapper[4866]: I1213 22:27:39.155074 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-mv76r"] Dec 13 22:27:39 crc kubenswrapper[4866]: I1213 22:27:39.162470 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-dk4x7"] Dec 13 22:27:39 crc kubenswrapper[4866]: I1213 22:27:39.185638 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-7w2tf"] Dec 13 22:27:39 crc kubenswrapper[4866]: I1213 22:27:39.186482 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-7w2tf" Dec 13 22:27:39 crc kubenswrapper[4866]: I1213 22:27:39.188410 4866 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-lrc4h" Dec 13 22:27:39 crc kubenswrapper[4866]: I1213 22:27:39.206900 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-7w2tf"] Dec 13 22:27:39 crc kubenswrapper[4866]: I1213 22:27:39.278655 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66llt\" (UniqueName: \"kubernetes.io/projected/1cfa97e6-d9ad-4e17-925b-293ddca0f525-kube-api-access-66llt\") pod \"cert-manager-webhook-5655c58dd6-7w2tf\" (UID: \"1cfa97e6-d9ad-4e17-925b-293ddca0f525\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-7w2tf" Dec 13 22:27:39 crc kubenswrapper[4866]: I1213 22:27:39.278718 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n72vr\" (UniqueName: \"kubernetes.io/projected/7b97b708-fde5-45b4-97e4-b27430b2751a-kube-api-access-n72vr\") pod \"cert-manager-5b446d88c5-dk4x7\" (UID: \"7b97b708-fde5-45b4-97e4-b27430b2751a\") " pod="cert-manager/cert-manager-5b446d88c5-dk4x7" Dec 13 22:27:39 crc kubenswrapper[4866]: I1213 22:27:39.278742 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xswf\" (UniqueName: \"kubernetes.io/projected/69d95c52-0a76-4712-8380-1e38eba5643a-kube-api-access-5xswf\") pod \"cert-manager-cainjector-7f985d654d-mv76r\" (UID: \"69d95c52-0a76-4712-8380-1e38eba5643a\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-mv76r" Dec 13 22:27:39 crc kubenswrapper[4866]: I1213 22:27:39.379758 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66llt\" (UniqueName: \"kubernetes.io/projected/1cfa97e6-d9ad-4e17-925b-293ddca0f525-kube-api-access-66llt\") pod \"cert-manager-webhook-5655c58dd6-7w2tf\" (UID: \"1cfa97e6-d9ad-4e17-925b-293ddca0f525\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-7w2tf" Dec 13 22:27:39 crc kubenswrapper[4866]: I1213 22:27:39.379841 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n72vr\" (UniqueName: \"kubernetes.io/projected/7b97b708-fde5-45b4-97e4-b27430b2751a-kube-api-access-n72vr\") pod \"cert-manager-5b446d88c5-dk4x7\" (UID: \"7b97b708-fde5-45b4-97e4-b27430b2751a\") " pod="cert-manager/cert-manager-5b446d88c5-dk4x7" Dec 13 22:27:39 crc kubenswrapper[4866]: I1213 22:27:39.379865 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xswf\" (UniqueName: \"kubernetes.io/projected/69d95c52-0a76-4712-8380-1e38eba5643a-kube-api-access-5xswf\") pod \"cert-manager-cainjector-7f985d654d-mv76r\" (UID: \"69d95c52-0a76-4712-8380-1e38eba5643a\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-mv76r" Dec 13 22:27:39 crc kubenswrapper[4866]: I1213 22:27:39.401793 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66llt\" (UniqueName: \"kubernetes.io/projected/1cfa97e6-d9ad-4e17-925b-293ddca0f525-kube-api-access-66llt\") pod \"cert-manager-webhook-5655c58dd6-7w2tf\" (UID: \"1cfa97e6-d9ad-4e17-925b-293ddca0f525\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-7w2tf" Dec 13 22:27:39 crc kubenswrapper[4866]: I1213 22:27:39.403758 4866 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-5xswf\" (UniqueName: \"kubernetes.io/projected/69d95c52-0a76-4712-8380-1e38eba5643a-kube-api-access-5xswf\") pod \"cert-manager-cainjector-7f985d654d-mv76r\" (UID: \"69d95c52-0a76-4712-8380-1e38eba5643a\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-mv76r" Dec 13 22:27:39 crc kubenswrapper[4866]: I1213 22:27:39.405164 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n72vr\" (UniqueName: \"kubernetes.io/projected/7b97b708-fde5-45b4-97e4-b27430b2751a-kube-api-access-n72vr\") pod \"cert-manager-5b446d88c5-dk4x7\" (UID: \"7b97b708-fde5-45b4-97e4-b27430b2751a\") " pod="cert-manager/cert-manager-5b446d88c5-dk4x7" Dec 13 22:27:39 crc kubenswrapper[4866]: I1213 22:27:39.452076 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-dk4x7" Dec 13 22:27:39 crc kubenswrapper[4866]: I1213 22:27:39.461959 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-mv76r" Dec 13 22:27:39 crc kubenswrapper[4866]: I1213 22:27:39.500889 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-7w2tf" Dec 13 22:27:39 crc kubenswrapper[4866]: I1213 22:27:39.748030 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-dk4x7"] Dec 13 22:27:39 crc kubenswrapper[4866]: W1213 22:27:39.754791 4866 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b97b708_fde5_45b4_97e4_b27430b2751a.slice/crio-85b5c533eccadb29bf121b800300bc47864523e50dee86243c11fd87b63b1791 WatchSource:0}: Error finding container 85b5c533eccadb29bf121b800300bc47864523e50dee86243c11fd87b63b1791: Status 404 returned error can't find the container with id 85b5c533eccadb29bf121b800300bc47864523e50dee86243c11fd87b63b1791 Dec 13 22:27:39 crc kubenswrapper[4866]: I1213 22:27:39.758660 4866 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 13 22:27:39 crc kubenswrapper[4866]: I1213 22:27:39.795996 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-7w2tf"] Dec 13 22:27:39 crc kubenswrapper[4866]: I1213 22:27:39.932538 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-mv76r"] Dec 13 22:27:40 crc kubenswrapper[4866]: I1213 22:27:40.271618 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-7w2tf" event={"ID":"1cfa97e6-d9ad-4e17-925b-293ddca0f525","Type":"ContainerStarted","Data":"1ab6c130118d82432e2900f94182d70b19152614e9a9b176b12edbc98b37edc9"} Dec 13 22:27:40 crc kubenswrapper[4866]: I1213 22:27:40.272775 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-mv76r" event={"ID":"69d95c52-0a76-4712-8380-1e38eba5643a","Type":"ContainerStarted","Data":"358148fc07132564c15a7f9159dd2a024ec59755c0fd4bbd236b1dccd4d690c7"} Dec 13 22:27:40 crc kubenswrapper[4866]: I1213 22:27:40.273593 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-dk4x7" event={"ID":"7b97b708-fde5-45b4-97e4-b27430b2751a","Type":"ContainerStarted","Data":"85b5c533eccadb29bf121b800300bc47864523e50dee86243c11fd87b63b1791"} Dec 13 22:27:43 crc kubenswrapper[4866]: I1213 
22:27:43.291551 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-7w2tf" event={"ID":"1cfa97e6-d9ad-4e17-925b-293ddca0f525","Type":"ContainerStarted","Data":"c3ab40fc4782db6597d57babc61424dd4e2418410a4964740dcacca90405cdd2"} Dec 13 22:27:43 crc kubenswrapper[4866]: I1213 22:27:43.292618 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-7w2tf" Dec 13 22:27:43 crc kubenswrapper[4866]: I1213 22:27:43.293804 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-dk4x7" event={"ID":"7b97b708-fde5-45b4-97e4-b27430b2751a","Type":"ContainerStarted","Data":"0f0f3fbcc47a26ca9f1b7690046d47615e7e94e0fdb5f8f3c03a394023058344"} Dec 13 22:27:43 crc kubenswrapper[4866]: I1213 22:27:43.309164 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-7w2tf" podStartSLOduration=1.211893319 podStartE2EDuration="4.309143466s" podCreationTimestamp="2025-12-13 22:27:39 +0000 UTC" firstStartedPulling="2025-12-13 22:27:39.800008257 +0000 UTC m=+657.841346809" lastFinishedPulling="2025-12-13 22:27:42.897258404 +0000 UTC m=+660.938596956" observedRunningTime="2025-12-13 22:27:43.308251534 +0000 UTC m=+661.349590086" watchObservedRunningTime="2025-12-13 22:27:43.309143466 +0000 UTC m=+661.350482018" Dec 13 22:27:43 crc kubenswrapper[4866]: I1213 22:27:43.341623 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-dk4x7" podStartSLOduration=1.202741965 podStartE2EDuration="4.341602401s" podCreationTimestamp="2025-12-13 22:27:39 +0000 UTC" firstStartedPulling="2025-12-13 22:27:39.758406728 +0000 UTC m=+657.799745280" lastFinishedPulling="2025-12-13 22:27:42.897267164 +0000 UTC m=+660.938605716" observedRunningTime="2025-12-13 22:27:43.324293117 +0000 UTC m=+661.365631669" watchObservedRunningTime="2025-12-13 22:27:43.341602401 +0000 UTC m=+661.382940943" Dec 13 22:27:44 crc kubenswrapper[4866]: I1213 22:27:44.299558 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-mv76r" event={"ID":"69d95c52-0a76-4712-8380-1e38eba5643a","Type":"ContainerStarted","Data":"143cdb811630b847cab274af7d4f7b7d7afc1cc18e919bf110f5ba9c6cec9407"} Dec 13 22:27:44 crc kubenswrapper[4866]: I1213 22:27:44.315746 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-mv76r" podStartSLOduration=1.681214678 podStartE2EDuration="5.315726378s" podCreationTimestamp="2025-12-13 22:27:39 +0000 UTC" firstStartedPulling="2025-12-13 22:27:39.941359191 +0000 UTC m=+657.982697753" lastFinishedPulling="2025-12-13 22:27:43.575870901 +0000 UTC m=+661.617209453" observedRunningTime="2025-12-13 22:27:44.310334536 +0000 UTC m=+662.351673088" watchObservedRunningTime="2025-12-13 22:27:44.315726378 +0000 UTC m=+662.357064920" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.101856 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zrmrs"] Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.102751 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" podUID="b977f313-87b4-4173-9263-91bc45047631" containerName="ovn-controller" containerID="cri-o://28ef25002d632b00ab6ff5ad0c473adf34634c8a7a1bb748da46bc7c5d0564d4" gracePeriod=30 Dec 
13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.102854 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" podUID="b977f313-87b4-4173-9263-91bc45047631" containerName="sbdb" containerID="cri-o://01bc4d54641f007a64ff420c9ed7037d8aeb139aefeae98a76d8bfd681e864b4" gracePeriod=30 Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.102887 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" podUID="b977f313-87b4-4173-9263-91bc45047631" containerName="ovn-acl-logging" containerID="cri-o://e229a65c98b5a52cee42cb1c0d0c8a56d694afa832e8db10ddbbfe0f75bd15ca" gracePeriod=30 Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.103018 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" podUID="b977f313-87b4-4173-9263-91bc45047631" containerName="kube-rbac-proxy-node" containerID="cri-o://d16e93aa9094bb0751cef12e2cab89d2e1d63fcaebd2889b158c941bf81b3fd4" gracePeriod=30 Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.102840 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" podUID="b977f313-87b4-4173-9263-91bc45047631" containerName="nbdb" containerID="cri-o://b26373da6f39692878640a993da4d753fbeaba62a096a177aaec37f9127cc907" gracePeriod=30 Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.103026 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" podUID="b977f313-87b4-4173-9263-91bc45047631" containerName="northd" containerID="cri-o://43b3b5c647bff1a3b3a4b22b597ff27d27b3bfad6a6ca7da38701e9d227ff4b4" gracePeriod=30 Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.103283 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" podUID="b977f313-87b4-4173-9263-91bc45047631" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://92604ac1010b6b2e6507aa99019b84c38baa44ae21f5aaaa8c5f4ace1f2f624f" gracePeriod=30 Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.148596 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" podUID="b977f313-87b4-4173-9263-91bc45047631" containerName="ovnkube-controller" containerID="cri-o://eb3141a3c1881cd3e9ea259080b49b3440d2f7b6ac7642a67c10a29ce79c7d95" gracePeriod=30 Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.326687 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g6nd6_7d8d0363-27e3-4269-8bf3-33fd2cf3af5c/kube-multus/0.log" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.326750 4866 generic.go:334] "Generic (PLEG): container finished" podID="7d8d0363-27e3-4269-8bf3-33fd2cf3af5c" containerID="4f124a69a4f95d2c4d96ca6b9e0688a0301c08cfe32cffb613d7943057978efe" exitCode=2 Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.326853 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g6nd6" event={"ID":"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c","Type":"ContainerDied","Data":"4f124a69a4f95d2c4d96ca6b9e0688a0301c08cfe32cffb613d7943057978efe"} Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.327446 4866 scope.go:117] "RemoveContainer" containerID="4f124a69a4f95d2c4d96ca6b9e0688a0301c08cfe32cffb613d7943057978efe" Dec 13 22:27:48 crc kubenswrapper[4866]: 
I1213 22:27:48.340389 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zrmrs_b977f313-87b4-4173-9263-91bc45047631/ovn-acl-logging/0.log" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.341570 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zrmrs_b977f313-87b4-4173-9263-91bc45047631/ovn-controller/0.log" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.342159 4866 generic.go:334] "Generic (PLEG): container finished" podID="b977f313-87b4-4173-9263-91bc45047631" containerID="92604ac1010b6b2e6507aa99019b84c38baa44ae21f5aaaa8c5f4ace1f2f624f" exitCode=0 Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.342179 4866 generic.go:334] "Generic (PLEG): container finished" podID="b977f313-87b4-4173-9263-91bc45047631" containerID="d16e93aa9094bb0751cef12e2cab89d2e1d63fcaebd2889b158c941bf81b3fd4" exitCode=0 Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.342191 4866 generic.go:334] "Generic (PLEG): container finished" podID="b977f313-87b4-4173-9263-91bc45047631" containerID="e229a65c98b5a52cee42cb1c0d0c8a56d694afa832e8db10ddbbfe0f75bd15ca" exitCode=143 Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.342198 4866 generic.go:334] "Generic (PLEG): container finished" podID="b977f313-87b4-4173-9263-91bc45047631" containerID="28ef25002d632b00ab6ff5ad0c473adf34634c8a7a1bb748da46bc7c5d0564d4" exitCode=143 Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.342235 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" event={"ID":"b977f313-87b4-4173-9263-91bc45047631","Type":"ContainerDied","Data":"92604ac1010b6b2e6507aa99019b84c38baa44ae21f5aaaa8c5f4ace1f2f624f"} Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.342262 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" event={"ID":"b977f313-87b4-4173-9263-91bc45047631","Type":"ContainerDied","Data":"d16e93aa9094bb0751cef12e2cab89d2e1d63fcaebd2889b158c941bf81b3fd4"} Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.342280 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" event={"ID":"b977f313-87b4-4173-9263-91bc45047631","Type":"ContainerDied","Data":"e229a65c98b5a52cee42cb1c0d0c8a56d694afa832e8db10ddbbfe0f75bd15ca"} Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.342312 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" event={"ID":"b977f313-87b4-4173-9263-91bc45047631","Type":"ContainerDied","Data":"28ef25002d632b00ab6ff5ad0c473adf34634c8a7a1bb748da46bc7c5d0564d4"} Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.477613 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zrmrs_b977f313-87b4-4173-9263-91bc45047631/ovn-acl-logging/0.log" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.478230 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zrmrs_b977f313-87b4-4173-9263-91bc45047631/ovn-controller/0.log" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.478653 4866 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.530577 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vtxps"] Dec 13 22:27:48 crc kubenswrapper[4866]: E1213 22:27:48.530816 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b977f313-87b4-4173-9263-91bc45047631" containerName="ovn-acl-logging" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.530836 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="b977f313-87b4-4173-9263-91bc45047631" containerName="ovn-acl-logging" Dec 13 22:27:48 crc kubenswrapper[4866]: E1213 22:27:48.530851 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b977f313-87b4-4173-9263-91bc45047631" containerName="northd" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.530859 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="b977f313-87b4-4173-9263-91bc45047631" containerName="northd" Dec 13 22:27:48 crc kubenswrapper[4866]: E1213 22:27:48.530868 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b977f313-87b4-4173-9263-91bc45047631" containerName="kube-rbac-proxy-node" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.530877 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="b977f313-87b4-4173-9263-91bc45047631" containerName="kube-rbac-proxy-node" Dec 13 22:27:48 crc kubenswrapper[4866]: E1213 22:27:48.530892 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b977f313-87b4-4173-9263-91bc45047631" containerName="kube-rbac-proxy-ovn-metrics" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.530900 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="b977f313-87b4-4173-9263-91bc45047631" containerName="kube-rbac-proxy-ovn-metrics" Dec 13 22:27:48 crc kubenswrapper[4866]: E1213 22:27:48.530910 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b977f313-87b4-4173-9263-91bc45047631" containerName="sbdb" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.530918 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="b977f313-87b4-4173-9263-91bc45047631" containerName="sbdb" Dec 13 22:27:48 crc kubenswrapper[4866]: E1213 22:27:48.530930 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b977f313-87b4-4173-9263-91bc45047631" containerName="ovnkube-controller" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.530937 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="b977f313-87b4-4173-9263-91bc45047631" containerName="ovnkube-controller" Dec 13 22:27:48 crc kubenswrapper[4866]: E1213 22:27:48.530944 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b977f313-87b4-4173-9263-91bc45047631" containerName="ovn-controller" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.530951 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="b977f313-87b4-4173-9263-91bc45047631" containerName="ovn-controller" Dec 13 22:27:48 crc kubenswrapper[4866]: E1213 22:27:48.530959 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b977f313-87b4-4173-9263-91bc45047631" containerName="nbdb" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.530966 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="b977f313-87b4-4173-9263-91bc45047631" containerName="nbdb" Dec 13 22:27:48 crc kubenswrapper[4866]: E1213 22:27:48.530976 4866 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b977f313-87b4-4173-9263-91bc45047631" containerName="kubecfg-setup" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.530984 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="b977f313-87b4-4173-9263-91bc45047631" containerName="kubecfg-setup" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.531118 4866 memory_manager.go:354] "RemoveStaleState removing state" podUID="b977f313-87b4-4173-9263-91bc45047631" containerName="sbdb" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.531133 4866 memory_manager.go:354] "RemoveStaleState removing state" podUID="b977f313-87b4-4173-9263-91bc45047631" containerName="kube-rbac-proxy-ovn-metrics" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.531144 4866 memory_manager.go:354] "RemoveStaleState removing state" podUID="b977f313-87b4-4173-9263-91bc45047631" containerName="nbdb" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.531152 4866 memory_manager.go:354] "RemoveStaleState removing state" podUID="b977f313-87b4-4173-9263-91bc45047631" containerName="northd" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.531163 4866 memory_manager.go:354] "RemoveStaleState removing state" podUID="b977f313-87b4-4173-9263-91bc45047631" containerName="ovn-controller" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.531174 4866 memory_manager.go:354] "RemoveStaleState removing state" podUID="b977f313-87b4-4173-9263-91bc45047631" containerName="ovn-acl-logging" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.531185 4866 memory_manager.go:354] "RemoveStaleState removing state" podUID="b977f313-87b4-4173-9263-91bc45047631" containerName="kube-rbac-proxy-node" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.531195 4866 memory_manager.go:354] "RemoveStaleState removing state" podUID="b977f313-87b4-4173-9263-91bc45047631" containerName="ovnkube-controller" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.533291 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.540175 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-ovn-node-metrics-cert\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.540340 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-host-slash\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.540458 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-etc-openvswitch\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.540546 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-systemd-units\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.540641 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-host-cni-bin\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.540728 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-host-run-ovn-kubernetes\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.540825 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.540921 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-var-lib-openvswitch\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.541027 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-host-run-netns\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.541135 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p658\" (UniqueName: \"kubernetes.io/projected/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-kube-api-access-2p658\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.541231 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-node-log\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.541375 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-ovnkube-config\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.541479 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-run-ovn\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.541559 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-log-socket\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.541650 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-run-openvswitch\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.541732 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-host-kubelet\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.541857 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-ovnkube-script-lib\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.541941 4866 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-host-cni-netd\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.542019 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-env-overrides\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.542181 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-run-systemd\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.642545 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-run-netns\") pod \"b977f313-87b4-4173-9263-91bc45047631\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.642596 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8flqm\" (UniqueName: \"kubernetes.io/projected/b977f313-87b4-4173-9263-91bc45047631-kube-api-access-8flqm\") pod \"b977f313-87b4-4173-9263-91bc45047631\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.642619 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-systemd-units\") pod \"b977f313-87b4-4173-9263-91bc45047631\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.642653 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-var-lib-cni-networks-ovn-kubernetes\") pod \"b977f313-87b4-4173-9263-91bc45047631\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.642672 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b977f313-87b4-4173-9263-91bc45047631-ovn-node-metrics-cert\") pod \"b977f313-87b4-4173-9263-91bc45047631\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.642690 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b977f313-87b4-4173-9263-91bc45047631-env-overrides\") pod \"b977f313-87b4-4173-9263-91bc45047631\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.642706 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-run-systemd\") 
pod \"b977f313-87b4-4173-9263-91bc45047631\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.642719 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-etc-openvswitch\") pod \"b977f313-87b4-4173-9263-91bc45047631\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.642732 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-kubelet\") pod \"b977f313-87b4-4173-9263-91bc45047631\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.642748 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-cni-netd\") pod \"b977f313-87b4-4173-9263-91bc45047631\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.642762 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-run-ovn-kubernetes\") pod \"b977f313-87b4-4173-9263-91bc45047631\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.642776 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-run-openvswitch\") pod \"b977f313-87b4-4173-9263-91bc45047631\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.642790 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-node-log\") pod \"b977f313-87b4-4173-9263-91bc45047631\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.642806 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-var-lib-openvswitch\") pod \"b977f313-87b4-4173-9263-91bc45047631\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.642827 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-slash\") pod \"b977f313-87b4-4173-9263-91bc45047631\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.642840 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-log-socket\") pod \"b977f313-87b4-4173-9263-91bc45047631\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.642855 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b977f313-87b4-4173-9263-91bc45047631-ovnkube-script-lib\") pod 
\"b977f313-87b4-4173-9263-91bc45047631\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.642869 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-run-ovn\") pod \"b977f313-87b4-4173-9263-91bc45047631\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.642890 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-cni-bin\") pod \"b977f313-87b4-4173-9263-91bc45047631\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.642904 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b977f313-87b4-4173-9263-91bc45047631-ovnkube-config\") pod \"b977f313-87b4-4173-9263-91bc45047631\" (UID: \"b977f313-87b4-4173-9263-91bc45047631\") " Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.642962 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-host-cni-bin\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.642979 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-host-run-ovn-kubernetes\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.642997 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.643015 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-var-lib-openvswitch\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.643029 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-host-run-netns\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.643044 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p658\" (UniqueName: \"kubernetes.io/projected/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-kube-api-access-2p658\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc 
kubenswrapper[4866]: I1213 22:27:48.643075 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-node-log\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.643090 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-ovnkube-config\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.643113 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-run-ovn\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.643127 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-log-socket\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.643154 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-run-openvswitch\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.643171 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-host-kubelet\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.643187 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-ovnkube-script-lib\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.643202 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-host-cni-netd\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.643218 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-env-overrides\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.643239 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-run-systemd\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.643258 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-ovn-node-metrics-cert\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.643273 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-host-slash\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.643286 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-etc-openvswitch\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.643305 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-systemd-units\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.643365 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-systemd-units\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.642677 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "b977f313-87b4-4173-9263-91bc45047631" (UID: "b977f313-87b4-4173-9263-91bc45047631"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.642713 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "b977f313-87b4-4173-9263-91bc45047631" (UID: "b977f313-87b4-4173-9263-91bc45047631"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.642988 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b977f313-87b4-4173-9263-91bc45047631-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "b977f313-87b4-4173-9263-91bc45047631" (UID: "b977f313-87b4-4173-9263-91bc45047631"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.643005 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "b977f313-87b4-4173-9263-91bc45047631" (UID: "b977f313-87b4-4173-9263-91bc45047631"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.643943 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.644063 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "b977f313-87b4-4173-9263-91bc45047631" (UID: "b977f313-87b4-4173-9263-91bc45047631"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.644139 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-slash" (OuterVolumeSpecName: "host-slash") pod "b977f313-87b4-4173-9263-91bc45047631" (UID: "b977f313-87b4-4173-9263-91bc45047631"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.644168 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "b977f313-87b4-4173-9263-91bc45047631" (UID: "b977f313-87b4-4173-9263-91bc45047631"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.644185 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "b977f313-87b4-4173-9263-91bc45047631" (UID: "b977f313-87b4-4173-9263-91bc45047631"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.644201 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "b977f313-87b4-4173-9263-91bc45047631" (UID: "b977f313-87b4-4173-9263-91bc45047631"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.644218 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "b977f313-87b4-4173-9263-91bc45047631" (UID: "b977f313-87b4-4173-9263-91bc45047631"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.644236 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-node-log" (OuterVolumeSpecName: "node-log") pod "b977f313-87b4-4173-9263-91bc45047631" (UID: "b977f313-87b4-4173-9263-91bc45047631"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.644253 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "b977f313-87b4-4173-9263-91bc45047631" (UID: "b977f313-87b4-4173-9263-91bc45047631"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.644282 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-run-openvswitch\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.644307 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-var-lib-openvswitch\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.644325 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-host-run-netns\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.644143 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-log-socket" (OuterVolumeSpecName: "log-socket") pod "b977f313-87b4-4173-9263-91bc45047631" (UID: "b977f313-87b4-4173-9263-91bc45047631"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.644544 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-node-log\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.645078 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-ovnkube-config\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.645117 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-run-ovn\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.645137 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-log-socket\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.645209 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "b977f313-87b4-4173-9263-91bc45047631" (UID: "b977f313-87b4-4173-9263-91bc45047631"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.645279 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "b977f313-87b4-4173-9263-91bc45047631" (UID: "b977f313-87b4-4173-9263-91bc45047631"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.645471 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-env-overrides\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.645504 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-host-kubelet\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.645525 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-host-cni-netd\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.645525 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-host-run-ovn-kubernetes\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.645534 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-host-cni-bin\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.645568 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-run-systemd\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.645597 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-etc-openvswitch\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.645597 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-host-slash\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.645661 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b977f313-87b4-4173-9263-91bc45047631-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "b977f313-87b4-4173-9263-91bc45047631" (UID: "b977f313-87b4-4173-9263-91bc45047631"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.646129 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-ovnkube-script-lib\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.646524 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b977f313-87b4-4173-9263-91bc45047631-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "b977f313-87b4-4173-9263-91bc45047631" (UID: "b977f313-87b4-4173-9263-91bc45047631"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.647805 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b977f313-87b4-4173-9263-91bc45047631-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "b977f313-87b4-4173-9263-91bc45047631" (UID: "b977f313-87b4-4173-9263-91bc45047631"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.647911 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b977f313-87b4-4173-9263-91bc45047631-kube-api-access-8flqm" (OuterVolumeSpecName: "kube-api-access-8flqm") pod "b977f313-87b4-4173-9263-91bc45047631" (UID: "b977f313-87b4-4173-9263-91bc45047631"). InnerVolumeSpecName "kube-api-access-8flqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.648386 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-ovn-node-metrics-cert\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.660160 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "b977f313-87b4-4173-9263-91bc45047631" (UID: "b977f313-87b4-4173-9263-91bc45047631"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.661362 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p658\" (UniqueName: \"kubernetes.io/projected/0a0eb58c-f412-4d15-acb9-19e0c33bc56c-kube-api-access-2p658\") pod \"ovnkube-node-vtxps\" (UID: \"0a0eb58c-f412-4d15-acb9-19e0c33bc56c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.744292 4866 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.744327 4866 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.744338 4866 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-node-log\") on node \"crc\" DevicePath \"\"" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.744349 4866 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.744360 4866 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-slash\") on node \"crc\" DevicePath \"\"" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.744371 4866 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-log-socket\") on node \"crc\" DevicePath \"\"" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.744382 4866 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b977f313-87b4-4173-9263-91bc45047631-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.744392 4866 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.744400 4866 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.744407 4866 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b977f313-87b4-4173-9263-91bc45047631-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.744415 4866 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.744424 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8flqm\" (UniqueName: 
\"kubernetes.io/projected/b977f313-87b4-4173-9263-91bc45047631-kube-api-access-8flqm\") on node \"crc\" DevicePath \"\"" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.744432 4866 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.744440 4866 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.744450 4866 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b977f313-87b4-4173-9263-91bc45047631-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.744459 4866 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b977f313-87b4-4173-9263-91bc45047631-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.744467 4866 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.744474 4866 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.744482 4866 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.744489 4866 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b977f313-87b4-4173-9263-91bc45047631-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.847588 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:48 crc kubenswrapper[4866]: I1213 22:27:48.859896 4866 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 13 22:27:48 crc kubenswrapper[4866]: W1213 22:27:48.874582 4866 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a0eb58c_f412_4d15_acb9_19e0c33bc56c.slice/crio-44c29d111e59201c2105d48b7b507c81d8626435d15412f46d6fd0b1c38cfcaa WatchSource:0}: Error finding container 44c29d111e59201c2105d48b7b507c81d8626435d15412f46d6fd0b1c38cfcaa: Status 404 returned error can't find the container with id 44c29d111e59201c2105d48b7b507c81d8626435d15412f46d6fd0b1c38cfcaa Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.348220 4866 generic.go:334] "Generic (PLEG): container finished" podID="0a0eb58c-f412-4d15-acb9-19e0c33bc56c" containerID="57675cc18ac1dca35fdec8c766da4623f41abd104db71a02c81c7520afd30fcf" exitCode=0 Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.348384 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" event={"ID":"0a0eb58c-f412-4d15-acb9-19e0c33bc56c","Type":"ContainerDied","Data":"57675cc18ac1dca35fdec8c766da4623f41abd104db71a02c81c7520afd30fcf"} Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.348499 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" event={"ID":"0a0eb58c-f412-4d15-acb9-19e0c33bc56c","Type":"ContainerStarted","Data":"44c29d111e59201c2105d48b7b507c81d8626435d15412f46d6fd0b1c38cfcaa"} Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.351469 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g6nd6_7d8d0363-27e3-4269-8bf3-33fd2cf3af5c/kube-multus/0.log" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.351572 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g6nd6" event={"ID":"7d8d0363-27e3-4269-8bf3-33fd2cf3af5c","Type":"ContainerStarted","Data":"7dc0b009de45808236fc01ed0a9db7a08b07c1028fc020717037f8bf011b85a7"} Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.355199 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zrmrs_b977f313-87b4-4173-9263-91bc45047631/ovn-acl-logging/0.log" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.355733 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zrmrs_b977f313-87b4-4173-9263-91bc45047631/ovn-controller/0.log" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.356142 4866 generic.go:334] "Generic (PLEG): container finished" podID="b977f313-87b4-4173-9263-91bc45047631" containerID="eb3141a3c1881cd3e9ea259080b49b3440d2f7b6ac7642a67c10a29ce79c7d95" exitCode=0 Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.356166 4866 generic.go:334] "Generic (PLEG): container finished" podID="b977f313-87b4-4173-9263-91bc45047631" containerID="01bc4d54641f007a64ff420c9ed7037d8aeb139aefeae98a76d8bfd681e864b4" exitCode=0 Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.356176 4866 generic.go:334] "Generic (PLEG): container finished" podID="b977f313-87b4-4173-9263-91bc45047631" containerID="b26373da6f39692878640a993da4d753fbeaba62a096a177aaec37f9127cc907" exitCode=0 Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.356185 4866 generic.go:334] "Generic (PLEG): 
container finished" podID="b977f313-87b4-4173-9263-91bc45047631" containerID="43b3b5c647bff1a3b3a4b22b597ff27d27b3bfad6a6ca7da38701e9d227ff4b4" exitCode=0 Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.356205 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" event={"ID":"b977f313-87b4-4173-9263-91bc45047631","Type":"ContainerDied","Data":"eb3141a3c1881cd3e9ea259080b49b3440d2f7b6ac7642a67c10a29ce79c7d95"} Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.356229 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" event={"ID":"b977f313-87b4-4173-9263-91bc45047631","Type":"ContainerDied","Data":"01bc4d54641f007a64ff420c9ed7037d8aeb139aefeae98a76d8bfd681e864b4"} Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.356243 4866 scope.go:117] "RemoveContainer" containerID="eb3141a3c1881cd3e9ea259080b49b3440d2f7b6ac7642a67c10a29ce79c7d95" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.356243 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" event={"ID":"b977f313-87b4-4173-9263-91bc45047631","Type":"ContainerDied","Data":"b26373da6f39692878640a993da4d753fbeaba62a096a177aaec37f9127cc907"} Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.356325 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" event={"ID":"b977f313-87b4-4173-9263-91bc45047631","Type":"ContainerDied","Data":"43b3b5c647bff1a3b3a4b22b597ff27d27b3bfad6a6ca7da38701e9d227ff4b4"} Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.356339 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" event={"ID":"b977f313-87b4-4173-9263-91bc45047631","Type":"ContainerDied","Data":"5d791af58011d92d7db58ac7570aef1c803edc5fd691e1ba8b21a05e423bfeab"} Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.356230 4866 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zrmrs" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.375998 4866 scope.go:117] "RemoveContainer" containerID="01bc4d54641f007a64ff420c9ed7037d8aeb139aefeae98a76d8bfd681e864b4" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.401732 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zrmrs"] Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.403501 4866 scope.go:117] "RemoveContainer" containerID="b26373da6f39692878640a993da4d753fbeaba62a096a177aaec37f9127cc907" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.405160 4866 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zrmrs"] Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.433360 4866 scope.go:117] "RemoveContainer" containerID="43b3b5c647bff1a3b3a4b22b597ff27d27b3bfad6a6ca7da38701e9d227ff4b4" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.452524 4866 scope.go:117] "RemoveContainer" containerID="92604ac1010b6b2e6507aa99019b84c38baa44ae21f5aaaa8c5f4ace1f2f624f" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.478397 4866 scope.go:117] "RemoveContainer" containerID="d16e93aa9094bb0751cef12e2cab89d2e1d63fcaebd2889b158c941bf81b3fd4" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.492182 4866 scope.go:117] "RemoveContainer" containerID="e229a65c98b5a52cee42cb1c0d0c8a56d694afa832e8db10ddbbfe0f75bd15ca" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.504584 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-7w2tf" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.509219 4866 scope.go:117] "RemoveContainer" containerID="28ef25002d632b00ab6ff5ad0c473adf34634c8a7a1bb748da46bc7c5d0564d4" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.550729 4866 scope.go:117] "RemoveContainer" containerID="26990efe2746cf0436d86476faf525d1d5616f6a7bf13637f6e79beab178592e" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.572920 4866 scope.go:117] "RemoveContainer" containerID="eb3141a3c1881cd3e9ea259080b49b3440d2f7b6ac7642a67c10a29ce79c7d95" Dec 13 22:27:49 crc kubenswrapper[4866]: E1213 22:27:49.573507 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb3141a3c1881cd3e9ea259080b49b3440d2f7b6ac7642a67c10a29ce79c7d95\": container with ID starting with eb3141a3c1881cd3e9ea259080b49b3440d2f7b6ac7642a67c10a29ce79c7d95 not found: ID does not exist" containerID="eb3141a3c1881cd3e9ea259080b49b3440d2f7b6ac7642a67c10a29ce79c7d95" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.573548 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb3141a3c1881cd3e9ea259080b49b3440d2f7b6ac7642a67c10a29ce79c7d95"} err="failed to get container status \"eb3141a3c1881cd3e9ea259080b49b3440d2f7b6ac7642a67c10a29ce79c7d95\": rpc error: code = NotFound desc = could not find container \"eb3141a3c1881cd3e9ea259080b49b3440d2f7b6ac7642a67c10a29ce79c7d95\": container with ID starting with eb3141a3c1881cd3e9ea259080b49b3440d2f7b6ac7642a67c10a29ce79c7d95 not found: ID does not exist" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.573580 4866 scope.go:117] "RemoveContainer" containerID="01bc4d54641f007a64ff420c9ed7037d8aeb139aefeae98a76d8bfd681e864b4" Dec 13 22:27:49 crc kubenswrapper[4866]: E1213 22:27:49.573896 4866 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01bc4d54641f007a64ff420c9ed7037d8aeb139aefeae98a76d8bfd681e864b4\": container with ID starting with 01bc4d54641f007a64ff420c9ed7037d8aeb139aefeae98a76d8bfd681e864b4 not found: ID does not exist" containerID="01bc4d54641f007a64ff420c9ed7037d8aeb139aefeae98a76d8bfd681e864b4" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.573918 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01bc4d54641f007a64ff420c9ed7037d8aeb139aefeae98a76d8bfd681e864b4"} err="failed to get container status \"01bc4d54641f007a64ff420c9ed7037d8aeb139aefeae98a76d8bfd681e864b4\": rpc error: code = NotFound desc = could not find container \"01bc4d54641f007a64ff420c9ed7037d8aeb139aefeae98a76d8bfd681e864b4\": container with ID starting with 01bc4d54641f007a64ff420c9ed7037d8aeb139aefeae98a76d8bfd681e864b4 not found: ID does not exist" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.573931 4866 scope.go:117] "RemoveContainer" containerID="b26373da6f39692878640a993da4d753fbeaba62a096a177aaec37f9127cc907" Dec 13 22:27:49 crc kubenswrapper[4866]: E1213 22:27:49.574203 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b26373da6f39692878640a993da4d753fbeaba62a096a177aaec37f9127cc907\": container with ID starting with b26373da6f39692878640a993da4d753fbeaba62a096a177aaec37f9127cc907 not found: ID does not exist" containerID="b26373da6f39692878640a993da4d753fbeaba62a096a177aaec37f9127cc907" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.574231 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b26373da6f39692878640a993da4d753fbeaba62a096a177aaec37f9127cc907"} err="failed to get container status \"b26373da6f39692878640a993da4d753fbeaba62a096a177aaec37f9127cc907\": rpc error: code = NotFound desc = could not find container \"b26373da6f39692878640a993da4d753fbeaba62a096a177aaec37f9127cc907\": container with ID starting with b26373da6f39692878640a993da4d753fbeaba62a096a177aaec37f9127cc907 not found: ID does not exist" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.574250 4866 scope.go:117] "RemoveContainer" containerID="43b3b5c647bff1a3b3a4b22b597ff27d27b3bfad6a6ca7da38701e9d227ff4b4" Dec 13 22:27:49 crc kubenswrapper[4866]: E1213 22:27:49.574688 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43b3b5c647bff1a3b3a4b22b597ff27d27b3bfad6a6ca7da38701e9d227ff4b4\": container with ID starting with 43b3b5c647bff1a3b3a4b22b597ff27d27b3bfad6a6ca7da38701e9d227ff4b4 not found: ID does not exist" containerID="43b3b5c647bff1a3b3a4b22b597ff27d27b3bfad6a6ca7da38701e9d227ff4b4" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.574713 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43b3b5c647bff1a3b3a4b22b597ff27d27b3bfad6a6ca7da38701e9d227ff4b4"} err="failed to get container status \"43b3b5c647bff1a3b3a4b22b597ff27d27b3bfad6a6ca7da38701e9d227ff4b4\": rpc error: code = NotFound desc = could not find container \"43b3b5c647bff1a3b3a4b22b597ff27d27b3bfad6a6ca7da38701e9d227ff4b4\": container with ID starting with 43b3b5c647bff1a3b3a4b22b597ff27d27b3bfad6a6ca7da38701e9d227ff4b4 not found: ID does not exist" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.574731 4866 scope.go:117] "RemoveContainer" 
containerID="92604ac1010b6b2e6507aa99019b84c38baa44ae21f5aaaa8c5f4ace1f2f624f" Dec 13 22:27:49 crc kubenswrapper[4866]: E1213 22:27:49.574980 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92604ac1010b6b2e6507aa99019b84c38baa44ae21f5aaaa8c5f4ace1f2f624f\": container with ID starting with 92604ac1010b6b2e6507aa99019b84c38baa44ae21f5aaaa8c5f4ace1f2f624f not found: ID does not exist" containerID="92604ac1010b6b2e6507aa99019b84c38baa44ae21f5aaaa8c5f4ace1f2f624f" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.575006 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92604ac1010b6b2e6507aa99019b84c38baa44ae21f5aaaa8c5f4ace1f2f624f"} err="failed to get container status \"92604ac1010b6b2e6507aa99019b84c38baa44ae21f5aaaa8c5f4ace1f2f624f\": rpc error: code = NotFound desc = could not find container \"92604ac1010b6b2e6507aa99019b84c38baa44ae21f5aaaa8c5f4ace1f2f624f\": container with ID starting with 92604ac1010b6b2e6507aa99019b84c38baa44ae21f5aaaa8c5f4ace1f2f624f not found: ID does not exist" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.575023 4866 scope.go:117] "RemoveContainer" containerID="d16e93aa9094bb0751cef12e2cab89d2e1d63fcaebd2889b158c941bf81b3fd4" Dec 13 22:27:49 crc kubenswrapper[4866]: E1213 22:27:49.575272 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d16e93aa9094bb0751cef12e2cab89d2e1d63fcaebd2889b158c941bf81b3fd4\": container with ID starting with d16e93aa9094bb0751cef12e2cab89d2e1d63fcaebd2889b158c941bf81b3fd4 not found: ID does not exist" containerID="d16e93aa9094bb0751cef12e2cab89d2e1d63fcaebd2889b158c941bf81b3fd4" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.575297 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d16e93aa9094bb0751cef12e2cab89d2e1d63fcaebd2889b158c941bf81b3fd4"} err="failed to get container status \"d16e93aa9094bb0751cef12e2cab89d2e1d63fcaebd2889b158c941bf81b3fd4\": rpc error: code = NotFound desc = could not find container \"d16e93aa9094bb0751cef12e2cab89d2e1d63fcaebd2889b158c941bf81b3fd4\": container with ID starting with d16e93aa9094bb0751cef12e2cab89d2e1d63fcaebd2889b158c941bf81b3fd4 not found: ID does not exist" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.575315 4866 scope.go:117] "RemoveContainer" containerID="e229a65c98b5a52cee42cb1c0d0c8a56d694afa832e8db10ddbbfe0f75bd15ca" Dec 13 22:27:49 crc kubenswrapper[4866]: E1213 22:27:49.575610 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e229a65c98b5a52cee42cb1c0d0c8a56d694afa832e8db10ddbbfe0f75bd15ca\": container with ID starting with e229a65c98b5a52cee42cb1c0d0c8a56d694afa832e8db10ddbbfe0f75bd15ca not found: ID does not exist" containerID="e229a65c98b5a52cee42cb1c0d0c8a56d694afa832e8db10ddbbfe0f75bd15ca" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.575628 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e229a65c98b5a52cee42cb1c0d0c8a56d694afa832e8db10ddbbfe0f75bd15ca"} err="failed to get container status \"e229a65c98b5a52cee42cb1c0d0c8a56d694afa832e8db10ddbbfe0f75bd15ca\": rpc error: code = NotFound desc = could not find container \"e229a65c98b5a52cee42cb1c0d0c8a56d694afa832e8db10ddbbfe0f75bd15ca\": container with ID starting with 
Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.575642 4866 scope.go:117] "RemoveContainer" containerID="28ef25002d632b00ab6ff5ad0c473adf34634c8a7a1bb748da46bc7c5d0564d4"
Dec 13 22:27:49 crc kubenswrapper[4866]: E1213 22:27:49.575938 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28ef25002d632b00ab6ff5ad0c473adf34634c8a7a1bb748da46bc7c5d0564d4\": container with ID starting with 28ef25002d632b00ab6ff5ad0c473adf34634c8a7a1bb748da46bc7c5d0564d4 not found: ID does not exist" containerID="28ef25002d632b00ab6ff5ad0c473adf34634c8a7a1bb748da46bc7c5d0564d4"
Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.575958 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28ef25002d632b00ab6ff5ad0c473adf34634c8a7a1bb748da46bc7c5d0564d4"} err="failed to get container status \"28ef25002d632b00ab6ff5ad0c473adf34634c8a7a1bb748da46bc7c5d0564d4\": rpc error: code = NotFound desc = could not find container \"28ef25002d632b00ab6ff5ad0c473adf34634c8a7a1bb748da46bc7c5d0564d4\": container with ID starting with 28ef25002d632b00ab6ff5ad0c473adf34634c8a7a1bb748da46bc7c5d0564d4 not found: ID does not exist"
Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.575969 4866 scope.go:117] "RemoveContainer" containerID="26990efe2746cf0436d86476faf525d1d5616f6a7bf13637f6e79beab178592e"
Dec 13 22:27:49 crc kubenswrapper[4866]: E1213 22:27:49.576201 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26990efe2746cf0436d86476faf525d1d5616f6a7bf13637f6e79beab178592e\": container with ID starting with 26990efe2746cf0436d86476faf525d1d5616f6a7bf13637f6e79beab178592e not found: ID does not exist" containerID="26990efe2746cf0436d86476faf525d1d5616f6a7bf13637f6e79beab178592e"
Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.576226 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26990efe2746cf0436d86476faf525d1d5616f6a7bf13637f6e79beab178592e"} err="failed to get container status \"26990efe2746cf0436d86476faf525d1d5616f6a7bf13637f6e79beab178592e\": rpc error: code = NotFound desc = could not find container \"26990efe2746cf0436d86476faf525d1d5616f6a7bf13637f6e79beab178592e\": container with ID starting with 26990efe2746cf0436d86476faf525d1d5616f6a7bf13637f6e79beab178592e not found: ID does not exist"
Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.576241 4866 scope.go:117] "RemoveContainer" containerID="eb3141a3c1881cd3e9ea259080b49b3440d2f7b6ac7642a67c10a29ce79c7d95"
Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.576468 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb3141a3c1881cd3e9ea259080b49b3440d2f7b6ac7642a67c10a29ce79c7d95"} err="failed to get container status \"eb3141a3c1881cd3e9ea259080b49b3440d2f7b6ac7642a67c10a29ce79c7d95\": rpc error: code = NotFound desc = could not find container \"eb3141a3c1881cd3e9ea259080b49b3440d2f7b6ac7642a67c10a29ce79c7d95\": container with ID starting with eb3141a3c1881cd3e9ea259080b49b3440d2f7b6ac7642a67c10a29ce79c7d95 not found: ID does not exist"
Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.576491 4866 scope.go:117] "RemoveContainer" containerID="01bc4d54641f007a64ff420c9ed7037d8aeb139aefeae98a76d8bfd681e864b4"
Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.576682 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01bc4d54641f007a64ff420c9ed7037d8aeb139aefeae98a76d8bfd681e864b4"} err="failed to get container status \"01bc4d54641f007a64ff420c9ed7037d8aeb139aefeae98a76d8bfd681e864b4\": rpc error: code = NotFound desc = could not find container \"01bc4d54641f007a64ff420c9ed7037d8aeb139aefeae98a76d8bfd681e864b4\": container with ID starting with 01bc4d54641f007a64ff420c9ed7037d8aeb139aefeae98a76d8bfd681e864b4 not found: ID does not exist"
Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.576707 4866 scope.go:117] "RemoveContainer" containerID="b26373da6f39692878640a993da4d753fbeaba62a096a177aaec37f9127cc907"
Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.576990 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b26373da6f39692878640a993da4d753fbeaba62a096a177aaec37f9127cc907"} err="failed to get container status \"b26373da6f39692878640a993da4d753fbeaba62a096a177aaec37f9127cc907\": rpc error: code = NotFound desc = could not find container \"b26373da6f39692878640a993da4d753fbeaba62a096a177aaec37f9127cc907\": container with ID starting with b26373da6f39692878640a993da4d753fbeaba62a096a177aaec37f9127cc907 not found: ID does not exist"
Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.577013 4866 scope.go:117] "RemoveContainer" containerID="43b3b5c647bff1a3b3a4b22b597ff27d27b3bfad6a6ca7da38701e9d227ff4b4"
Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.577295 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43b3b5c647bff1a3b3a4b22b597ff27d27b3bfad6a6ca7da38701e9d227ff4b4"} err="failed to get container status \"43b3b5c647bff1a3b3a4b22b597ff27d27b3bfad6a6ca7da38701e9d227ff4b4\": rpc error: code = NotFound desc = could not find container \"43b3b5c647bff1a3b3a4b22b597ff27d27b3bfad6a6ca7da38701e9d227ff4b4\": container with ID starting with 43b3b5c647bff1a3b3a4b22b597ff27d27b3bfad6a6ca7da38701e9d227ff4b4 not found: ID does not exist"
Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.577318 4866 scope.go:117] "RemoveContainer" containerID="92604ac1010b6b2e6507aa99019b84c38baa44ae21f5aaaa8c5f4ace1f2f624f"
Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.577607 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92604ac1010b6b2e6507aa99019b84c38baa44ae21f5aaaa8c5f4ace1f2f624f"} err="failed to get container status \"92604ac1010b6b2e6507aa99019b84c38baa44ae21f5aaaa8c5f4ace1f2f624f\": rpc error: code = NotFound desc = could not find container \"92604ac1010b6b2e6507aa99019b84c38baa44ae21f5aaaa8c5f4ace1f2f624f\": container with ID starting with 92604ac1010b6b2e6507aa99019b84c38baa44ae21f5aaaa8c5f4ace1f2f624f not found: ID does not exist"
Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.577631 4866 scope.go:117] "RemoveContainer" containerID="d16e93aa9094bb0751cef12e2cab89d2e1d63fcaebd2889b158c941bf81b3fd4"
Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.577816 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d16e93aa9094bb0751cef12e2cab89d2e1d63fcaebd2889b158c941bf81b3fd4"} err="failed to get container status \"d16e93aa9094bb0751cef12e2cab89d2e1d63fcaebd2889b158c941bf81b3fd4\": rpc error: code = NotFound desc = could not find container \"d16e93aa9094bb0751cef12e2cab89d2e1d63fcaebd2889b158c941bf81b3fd4\": container with ID starting with d16e93aa9094bb0751cef12e2cab89d2e1d63fcaebd2889b158c941bf81b3fd4 not found: ID does not exist"
Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.577838 4866 scope.go:117] "RemoveContainer" containerID="e229a65c98b5a52cee42cb1c0d0c8a56d694afa832e8db10ddbbfe0f75bd15ca"
Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.578020 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e229a65c98b5a52cee42cb1c0d0c8a56d694afa832e8db10ddbbfe0f75bd15ca"} err="failed to get container status \"e229a65c98b5a52cee42cb1c0d0c8a56d694afa832e8db10ddbbfe0f75bd15ca\": rpc error: code = NotFound desc = could not find container \"e229a65c98b5a52cee42cb1c0d0c8a56d694afa832e8db10ddbbfe0f75bd15ca\": container with ID starting with e229a65c98b5a52cee42cb1c0d0c8a56d694afa832e8db10ddbbfe0f75bd15ca not found: ID does not exist"
Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.578038 4866 scope.go:117] "RemoveContainer" containerID="28ef25002d632b00ab6ff5ad0c473adf34634c8a7a1bb748da46bc7c5d0564d4"
Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.578315 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28ef25002d632b00ab6ff5ad0c473adf34634c8a7a1bb748da46bc7c5d0564d4"} err="failed to get container status \"28ef25002d632b00ab6ff5ad0c473adf34634c8a7a1bb748da46bc7c5d0564d4\": rpc error: code = NotFound desc = could not find container \"28ef25002d632b00ab6ff5ad0c473adf34634c8a7a1bb748da46bc7c5d0564d4\": container with ID starting with 28ef25002d632b00ab6ff5ad0c473adf34634c8a7a1bb748da46bc7c5d0564d4 not found: ID does not exist"
Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.578334 4866 scope.go:117] "RemoveContainer" containerID="26990efe2746cf0436d86476faf525d1d5616f6a7bf13637f6e79beab178592e"
Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.578952 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26990efe2746cf0436d86476faf525d1d5616f6a7bf13637f6e79beab178592e"} err="failed to get container status \"26990efe2746cf0436d86476faf525d1d5616f6a7bf13637f6e79beab178592e\": rpc error: code = NotFound desc = could not find container \"26990efe2746cf0436d86476faf525d1d5616f6a7bf13637f6e79beab178592e\": container with ID starting with 26990efe2746cf0436d86476faf525d1d5616f6a7bf13637f6e79beab178592e not found: ID does not exist"
Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.578979 4866 scope.go:117] "RemoveContainer" containerID="eb3141a3c1881cd3e9ea259080b49b3440d2f7b6ac7642a67c10a29ce79c7d95"
Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.579235 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb3141a3c1881cd3e9ea259080b49b3440d2f7b6ac7642a67c10a29ce79c7d95"} err="failed to get container status \"eb3141a3c1881cd3e9ea259080b49b3440d2f7b6ac7642a67c10a29ce79c7d95\": rpc error: code = NotFound desc = could not find container \"eb3141a3c1881cd3e9ea259080b49b3440d2f7b6ac7642a67c10a29ce79c7d95\": container with ID starting with eb3141a3c1881cd3e9ea259080b49b3440d2f7b6ac7642a67c10a29ce79c7d95 not found: ID does not exist"
Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.579282 4866 scope.go:117] "RemoveContainer" containerID="01bc4d54641f007a64ff420c9ed7037d8aeb139aefeae98a76d8bfd681e864b4"
Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.579550 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01bc4d54641f007a64ff420c9ed7037d8aeb139aefeae98a76d8bfd681e864b4"} err="failed to get container status \"01bc4d54641f007a64ff420c9ed7037d8aeb139aefeae98a76d8bfd681e864b4\": rpc error: code = NotFound desc = could not find container \"01bc4d54641f007a64ff420c9ed7037d8aeb139aefeae98a76d8bfd681e864b4\": container with ID starting with 01bc4d54641f007a64ff420c9ed7037d8aeb139aefeae98a76d8bfd681e864b4 not found: ID does not exist"
Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.579572 4866 scope.go:117] "RemoveContainer" containerID="b26373da6f39692878640a993da4d753fbeaba62a096a177aaec37f9127cc907"
Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.579765 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b26373da6f39692878640a993da4d753fbeaba62a096a177aaec37f9127cc907"} err="failed to get container status \"b26373da6f39692878640a993da4d753fbeaba62a096a177aaec37f9127cc907\": rpc error: code = NotFound desc = could not find container \"b26373da6f39692878640a993da4d753fbeaba62a096a177aaec37f9127cc907\": container with ID starting with b26373da6f39692878640a993da4d753fbeaba62a096a177aaec37f9127cc907 not found: ID does not exist"
Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.579785 4866 scope.go:117] "RemoveContainer" containerID="43b3b5c647bff1a3b3a4b22b597ff27d27b3bfad6a6ca7da38701e9d227ff4b4"
Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.580100 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43b3b5c647bff1a3b3a4b22b597ff27d27b3bfad6a6ca7da38701e9d227ff4b4"} err="failed to get container status \"43b3b5c647bff1a3b3a4b22b597ff27d27b3bfad6a6ca7da38701e9d227ff4b4\": rpc error: code = NotFound desc = could not find container \"43b3b5c647bff1a3b3a4b22b597ff27d27b3bfad6a6ca7da38701e9d227ff4b4\": container with ID starting with 43b3b5c647bff1a3b3a4b22b597ff27d27b3bfad6a6ca7da38701e9d227ff4b4 not found: ID does not exist"
Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.580119 4866 scope.go:117] "RemoveContainer" containerID="92604ac1010b6b2e6507aa99019b84c38baa44ae21f5aaaa8c5f4ace1f2f624f"
Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.580365 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92604ac1010b6b2e6507aa99019b84c38baa44ae21f5aaaa8c5f4ace1f2f624f"} err="failed to get container status \"92604ac1010b6b2e6507aa99019b84c38baa44ae21f5aaaa8c5f4ace1f2f624f\": rpc error: code = NotFound desc = could not find container \"92604ac1010b6b2e6507aa99019b84c38baa44ae21f5aaaa8c5f4ace1f2f624f\": container with ID starting with 92604ac1010b6b2e6507aa99019b84c38baa44ae21f5aaaa8c5f4ace1f2f624f not found: ID does not exist"
Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.580389 4866 scope.go:117] "RemoveContainer" containerID="d16e93aa9094bb0751cef12e2cab89d2e1d63fcaebd2889b158c941bf81b3fd4"
Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.580645 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d16e93aa9094bb0751cef12e2cab89d2e1d63fcaebd2889b158c941bf81b3fd4"} err="failed to get container status \"d16e93aa9094bb0751cef12e2cab89d2e1d63fcaebd2889b158c941bf81b3fd4\": rpc error: code = NotFound desc = could not find container \"d16e93aa9094bb0751cef12e2cab89d2e1d63fcaebd2889b158c941bf81b3fd4\": container with ID starting with d16e93aa9094bb0751cef12e2cab89d2e1d63fcaebd2889b158c941bf81b3fd4 not found: ID does not exist" Dec
13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.580666 4866 scope.go:117] "RemoveContainer" containerID="e229a65c98b5a52cee42cb1c0d0c8a56d694afa832e8db10ddbbfe0f75bd15ca" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.580861 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e229a65c98b5a52cee42cb1c0d0c8a56d694afa832e8db10ddbbfe0f75bd15ca"} err="failed to get container status \"e229a65c98b5a52cee42cb1c0d0c8a56d694afa832e8db10ddbbfe0f75bd15ca\": rpc error: code = NotFound desc = could not find container \"e229a65c98b5a52cee42cb1c0d0c8a56d694afa832e8db10ddbbfe0f75bd15ca\": container with ID starting with e229a65c98b5a52cee42cb1c0d0c8a56d694afa832e8db10ddbbfe0f75bd15ca not found: ID does not exist" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.580882 4866 scope.go:117] "RemoveContainer" containerID="28ef25002d632b00ab6ff5ad0c473adf34634c8a7a1bb748da46bc7c5d0564d4" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.581173 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28ef25002d632b00ab6ff5ad0c473adf34634c8a7a1bb748da46bc7c5d0564d4"} err="failed to get container status \"28ef25002d632b00ab6ff5ad0c473adf34634c8a7a1bb748da46bc7c5d0564d4\": rpc error: code = NotFound desc = could not find container \"28ef25002d632b00ab6ff5ad0c473adf34634c8a7a1bb748da46bc7c5d0564d4\": container with ID starting with 28ef25002d632b00ab6ff5ad0c473adf34634c8a7a1bb748da46bc7c5d0564d4 not found: ID does not exist" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.581195 4866 scope.go:117] "RemoveContainer" containerID="26990efe2746cf0436d86476faf525d1d5616f6a7bf13637f6e79beab178592e" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.581460 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26990efe2746cf0436d86476faf525d1d5616f6a7bf13637f6e79beab178592e"} err="failed to get container status \"26990efe2746cf0436d86476faf525d1d5616f6a7bf13637f6e79beab178592e\": rpc error: code = NotFound desc = could not find container \"26990efe2746cf0436d86476faf525d1d5616f6a7bf13637f6e79beab178592e\": container with ID starting with 26990efe2746cf0436d86476faf525d1d5616f6a7bf13637f6e79beab178592e not found: ID does not exist" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.581478 4866 scope.go:117] "RemoveContainer" containerID="eb3141a3c1881cd3e9ea259080b49b3440d2f7b6ac7642a67c10a29ce79c7d95" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.581717 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb3141a3c1881cd3e9ea259080b49b3440d2f7b6ac7642a67c10a29ce79c7d95"} err="failed to get container status \"eb3141a3c1881cd3e9ea259080b49b3440d2f7b6ac7642a67c10a29ce79c7d95\": rpc error: code = NotFound desc = could not find container \"eb3141a3c1881cd3e9ea259080b49b3440d2f7b6ac7642a67c10a29ce79c7d95\": container with ID starting with eb3141a3c1881cd3e9ea259080b49b3440d2f7b6ac7642a67c10a29ce79c7d95 not found: ID does not exist" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.581737 4866 scope.go:117] "RemoveContainer" containerID="01bc4d54641f007a64ff420c9ed7037d8aeb139aefeae98a76d8bfd681e864b4" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.581926 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01bc4d54641f007a64ff420c9ed7037d8aeb139aefeae98a76d8bfd681e864b4"} err="failed to get container status 
\"01bc4d54641f007a64ff420c9ed7037d8aeb139aefeae98a76d8bfd681e864b4\": rpc error: code = NotFound desc = could not find container \"01bc4d54641f007a64ff420c9ed7037d8aeb139aefeae98a76d8bfd681e864b4\": container with ID starting with 01bc4d54641f007a64ff420c9ed7037d8aeb139aefeae98a76d8bfd681e864b4 not found: ID does not exist" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.581946 4866 scope.go:117] "RemoveContainer" containerID="b26373da6f39692878640a993da4d753fbeaba62a096a177aaec37f9127cc907" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.582159 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b26373da6f39692878640a993da4d753fbeaba62a096a177aaec37f9127cc907"} err="failed to get container status \"b26373da6f39692878640a993da4d753fbeaba62a096a177aaec37f9127cc907\": rpc error: code = NotFound desc = could not find container \"b26373da6f39692878640a993da4d753fbeaba62a096a177aaec37f9127cc907\": container with ID starting with b26373da6f39692878640a993da4d753fbeaba62a096a177aaec37f9127cc907 not found: ID does not exist" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.582280 4866 scope.go:117] "RemoveContainer" containerID="43b3b5c647bff1a3b3a4b22b597ff27d27b3bfad6a6ca7da38701e9d227ff4b4" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.582503 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43b3b5c647bff1a3b3a4b22b597ff27d27b3bfad6a6ca7da38701e9d227ff4b4"} err="failed to get container status \"43b3b5c647bff1a3b3a4b22b597ff27d27b3bfad6a6ca7da38701e9d227ff4b4\": rpc error: code = NotFound desc = could not find container \"43b3b5c647bff1a3b3a4b22b597ff27d27b3bfad6a6ca7da38701e9d227ff4b4\": container with ID starting with 43b3b5c647bff1a3b3a4b22b597ff27d27b3bfad6a6ca7da38701e9d227ff4b4 not found: ID does not exist" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.582525 4866 scope.go:117] "RemoveContainer" containerID="92604ac1010b6b2e6507aa99019b84c38baa44ae21f5aaaa8c5f4ace1f2f624f" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.582726 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92604ac1010b6b2e6507aa99019b84c38baa44ae21f5aaaa8c5f4ace1f2f624f"} err="failed to get container status \"92604ac1010b6b2e6507aa99019b84c38baa44ae21f5aaaa8c5f4ace1f2f624f\": rpc error: code = NotFound desc = could not find container \"92604ac1010b6b2e6507aa99019b84c38baa44ae21f5aaaa8c5f4ace1f2f624f\": container with ID starting with 92604ac1010b6b2e6507aa99019b84c38baa44ae21f5aaaa8c5f4ace1f2f624f not found: ID does not exist" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.582747 4866 scope.go:117] "RemoveContainer" containerID="d16e93aa9094bb0751cef12e2cab89d2e1d63fcaebd2889b158c941bf81b3fd4" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.582983 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d16e93aa9094bb0751cef12e2cab89d2e1d63fcaebd2889b158c941bf81b3fd4"} err="failed to get container status \"d16e93aa9094bb0751cef12e2cab89d2e1d63fcaebd2889b158c941bf81b3fd4\": rpc error: code = NotFound desc = could not find container \"d16e93aa9094bb0751cef12e2cab89d2e1d63fcaebd2889b158c941bf81b3fd4\": container with ID starting with d16e93aa9094bb0751cef12e2cab89d2e1d63fcaebd2889b158c941bf81b3fd4 not found: ID does not exist" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.583006 4866 scope.go:117] "RemoveContainer" 
containerID="e229a65c98b5a52cee42cb1c0d0c8a56d694afa832e8db10ddbbfe0f75bd15ca" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.583265 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e229a65c98b5a52cee42cb1c0d0c8a56d694afa832e8db10ddbbfe0f75bd15ca"} err="failed to get container status \"e229a65c98b5a52cee42cb1c0d0c8a56d694afa832e8db10ddbbfe0f75bd15ca\": rpc error: code = NotFound desc = could not find container \"e229a65c98b5a52cee42cb1c0d0c8a56d694afa832e8db10ddbbfe0f75bd15ca\": container with ID starting with e229a65c98b5a52cee42cb1c0d0c8a56d694afa832e8db10ddbbfe0f75bd15ca not found: ID does not exist" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.583288 4866 scope.go:117] "RemoveContainer" containerID="28ef25002d632b00ab6ff5ad0c473adf34634c8a7a1bb748da46bc7c5d0564d4" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.583476 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28ef25002d632b00ab6ff5ad0c473adf34634c8a7a1bb748da46bc7c5d0564d4"} err="failed to get container status \"28ef25002d632b00ab6ff5ad0c473adf34634c8a7a1bb748da46bc7c5d0564d4\": rpc error: code = NotFound desc = could not find container \"28ef25002d632b00ab6ff5ad0c473adf34634c8a7a1bb748da46bc7c5d0564d4\": container with ID starting with 28ef25002d632b00ab6ff5ad0c473adf34634c8a7a1bb748da46bc7c5d0564d4 not found: ID does not exist" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.583491 4866 scope.go:117] "RemoveContainer" containerID="26990efe2746cf0436d86476faf525d1d5616f6a7bf13637f6e79beab178592e" Dec 13 22:27:49 crc kubenswrapper[4866]: I1213 22:27:49.583702 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26990efe2746cf0436d86476faf525d1d5616f6a7bf13637f6e79beab178592e"} err="failed to get container status \"26990efe2746cf0436d86476faf525d1d5616f6a7bf13637f6e79beab178592e\": rpc error: code = NotFound desc = could not find container \"26990efe2746cf0436d86476faf525d1d5616f6a7bf13637f6e79beab178592e\": container with ID starting with 26990efe2746cf0436d86476faf525d1d5616f6a7bf13637f6e79beab178592e not found: ID does not exist" Dec 13 22:27:50 crc kubenswrapper[4866]: I1213 22:27:50.219869 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b977f313-87b4-4173-9263-91bc45047631" path="/var/lib/kubelet/pods/b977f313-87b4-4173-9263-91bc45047631/volumes" Dec 13 22:27:50 crc kubenswrapper[4866]: I1213 22:27:50.363417 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" event={"ID":"0a0eb58c-f412-4d15-acb9-19e0c33bc56c","Type":"ContainerStarted","Data":"3e0473748fd526eafce4edbeae809efd2cb930f945906ea482bbb537df1d11bc"} Dec 13 22:27:50 crc kubenswrapper[4866]: I1213 22:27:50.364267 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" event={"ID":"0a0eb58c-f412-4d15-acb9-19e0c33bc56c","Type":"ContainerStarted","Data":"fb02065217b9e8ae3ef36872d586f9bad5ce0aced99833ac25af851ca2274e14"} Dec 13 22:27:50 crc kubenswrapper[4866]: I1213 22:27:50.364372 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" event={"ID":"0a0eb58c-f412-4d15-acb9-19e0c33bc56c","Type":"ContainerStarted","Data":"e1a43fc772d01d967611d2511a499f665b9e93daba9f7425daf9161755256f78"} Dec 13 22:27:50 crc kubenswrapper[4866]: I1213 22:27:50.364435 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" event={"ID":"0a0eb58c-f412-4d15-acb9-19e0c33bc56c","Type":"ContainerStarted","Data":"2daaf58d861beb36adc9248bf09f0549d201fdbb129ede6b52abf173a6ac0726"} Dec 13 22:27:50 crc kubenswrapper[4866]: I1213 22:27:50.364489 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" event={"ID":"0a0eb58c-f412-4d15-acb9-19e0c33bc56c","Type":"ContainerStarted","Data":"b8a3e39e8b98ec1b53ad7113f2fcc8d4779e5cdca3252b099da0051f5251a446"} Dec 13 22:27:50 crc kubenswrapper[4866]: I1213 22:27:50.364562 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" event={"ID":"0a0eb58c-f412-4d15-acb9-19e0c33bc56c","Type":"ContainerStarted","Data":"8fc4735a2e08bf132e578702cff4d12d31f364a6fc5652fa9163262d44adf969"} Dec 13 22:27:52 crc kubenswrapper[4866]: I1213 22:27:52.377340 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" event={"ID":"0a0eb58c-f412-4d15-acb9-19e0c33bc56c","Type":"ContainerStarted","Data":"a6d3b3451bdca68d097ae454a16c60c7dcda65f4f4790e1113e71658f0e17b81"} Dec 13 22:27:55 crc kubenswrapper[4866]: I1213 22:27:55.414697 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" event={"ID":"0a0eb58c-f412-4d15-acb9-19e0c33bc56c","Type":"ContainerStarted","Data":"3e9aeeaf761a85b779c62f5f2c4c1477508d8d62db1396b928241dc95bdcdb7b"} Dec 13 22:27:55 crc kubenswrapper[4866]: I1213 22:27:55.416169 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:55 crc kubenswrapper[4866]: I1213 22:27:55.416252 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:55 crc kubenswrapper[4866]: I1213 22:27:55.416313 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:55 crc kubenswrapper[4866]: I1213 22:27:55.456096 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:55 crc kubenswrapper[4866]: I1213 22:27:55.464332 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:27:55 crc kubenswrapper[4866]: I1213 22:27:55.489596 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" podStartSLOduration=7.489578442 podStartE2EDuration="7.489578442s" podCreationTimestamp="2025-12-13 22:27:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:27:55.447029129 +0000 UTC m=+673.488367681" watchObservedRunningTime="2025-12-13 22:27:55.489578442 +0000 UTC m=+673.530916994" Dec 13 22:28:18 crc kubenswrapper[4866]: I1213 22:28:18.879911 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vtxps" Dec 13 22:28:33 crc kubenswrapper[4866]: I1213 22:28:33.035526 4866 patch_prober.go:28] interesting pod/machine-config-daemon-2855n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 22:28:33 crc 
kubenswrapper[4866]: I1213 22:28:33.036134 4866 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2855n" podUID="9749ec6a-aa76-4ae0-a9d0-453edbf21bca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 22:28:46 crc kubenswrapper[4866]: I1213 22:28:46.031173 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-v58rx/must-gather-27sp2"] Dec 13 22:28:46 crc kubenswrapper[4866]: I1213 22:28:46.032602 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v58rx/must-gather-27sp2" Dec 13 22:28:46 crc kubenswrapper[4866]: I1213 22:28:46.038684 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-v58rx"/"openshift-service-ca.crt" Dec 13 22:28:46 crc kubenswrapper[4866]: I1213 22:28:46.039203 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-v58rx"/"kube-root-ca.crt" Dec 13 22:28:46 crc kubenswrapper[4866]: I1213 22:28:46.055372 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-v58rx/must-gather-27sp2"] Dec 13 22:28:46 crc kubenswrapper[4866]: I1213 22:28:46.231541 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7a1d3004-582c-4013-a776-abb703ec3228-must-gather-output\") pod \"must-gather-27sp2\" (UID: \"7a1d3004-582c-4013-a776-abb703ec3228\") " pod="openshift-must-gather-v58rx/must-gather-27sp2" Dec 13 22:28:46 crc kubenswrapper[4866]: I1213 22:28:46.231622 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-249lq\" (UniqueName: \"kubernetes.io/projected/7a1d3004-582c-4013-a776-abb703ec3228-kube-api-access-249lq\") pod \"must-gather-27sp2\" (UID: \"7a1d3004-582c-4013-a776-abb703ec3228\") " pod="openshift-must-gather-v58rx/must-gather-27sp2" Dec 13 22:28:46 crc kubenswrapper[4866]: I1213 22:28:46.333093 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-249lq\" (UniqueName: \"kubernetes.io/projected/7a1d3004-582c-4013-a776-abb703ec3228-kube-api-access-249lq\") pod \"must-gather-27sp2\" (UID: \"7a1d3004-582c-4013-a776-abb703ec3228\") " pod="openshift-must-gather-v58rx/must-gather-27sp2" Dec 13 22:28:46 crc kubenswrapper[4866]: I1213 22:28:46.333257 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7a1d3004-582c-4013-a776-abb703ec3228-must-gather-output\") pod \"must-gather-27sp2\" (UID: \"7a1d3004-582c-4013-a776-abb703ec3228\") " pod="openshift-must-gather-v58rx/must-gather-27sp2" Dec 13 22:28:46 crc kubenswrapper[4866]: I1213 22:28:46.333660 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7a1d3004-582c-4013-a776-abb703ec3228-must-gather-output\") pod \"must-gather-27sp2\" (UID: \"7a1d3004-582c-4013-a776-abb703ec3228\") " pod="openshift-must-gather-v58rx/must-gather-27sp2" Dec 13 22:28:46 crc kubenswrapper[4866]: I1213 22:28:46.352234 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-249lq\" (UniqueName: \"kubernetes.io/projected/7a1d3004-582c-4013-a776-abb703ec3228-kube-api-access-249lq\") pod 
\"must-gather-27sp2\" (UID: \"7a1d3004-582c-4013-a776-abb703ec3228\") " pod="openshift-must-gather-v58rx/must-gather-27sp2" Dec 13 22:28:46 crc kubenswrapper[4866]: I1213 22:28:46.645297 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v58rx/must-gather-27sp2" Dec 13 22:28:46 crc kubenswrapper[4866]: I1213 22:28:46.839212 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-v58rx/must-gather-27sp2"] Dec 13 22:28:47 crc kubenswrapper[4866]: I1213 22:28:47.736104 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v58rx/must-gather-27sp2" event={"ID":"7a1d3004-582c-4013-a776-abb703ec3228","Type":"ContainerStarted","Data":"1a2a0023d6c45eaa37c92c1df93cd12e957dc5f8163e182c4582d3f6dcaf5ecd"} Dec 13 22:28:55 crc kubenswrapper[4866]: I1213 22:28:55.327557 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v58rx/must-gather-27sp2" event={"ID":"7a1d3004-582c-4013-a776-abb703ec3228","Type":"ContainerStarted","Data":"2174653fd15703f32319804e72454faf3945ceb15b4333ac651f73b7b7aec21c"} Dec 13 22:28:55 crc kubenswrapper[4866]: I1213 22:28:55.327993 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v58rx/must-gather-27sp2" event={"ID":"7a1d3004-582c-4013-a776-abb703ec3228","Type":"ContainerStarted","Data":"f725fc3a1b67b98aa80afa60c56310058fdf62e48ce2f46441e177061ce46788"} Dec 13 22:28:55 crc kubenswrapper[4866]: I1213 22:28:55.344527 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-v58rx/must-gather-27sp2" podStartSLOduration=2.698983647 podStartE2EDuration="10.344505306s" podCreationTimestamp="2025-12-13 22:28:45 +0000 UTC" firstStartedPulling="2025-12-13 22:28:46.845321034 +0000 UTC m=+724.886659586" lastFinishedPulling="2025-12-13 22:28:54.490842673 +0000 UTC m=+732.532181245" observedRunningTime="2025-12-13 22:28:55.341038843 +0000 UTC m=+733.382377455" watchObservedRunningTime="2025-12-13 22:28:55.344505306 +0000 UTC m=+733.385843858" Dec 13 22:29:03 crc kubenswrapper[4866]: I1213 22:29:03.035610 4866 patch_prober.go:28] interesting pod/machine-config-daemon-2855n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 22:29:03 crc kubenswrapper[4866]: I1213 22:29:03.035923 4866 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2855n" podUID="9749ec6a-aa76-4ae0-a9d0-453edbf21bca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 22:29:29 crc kubenswrapper[4866]: I1213 22:29:29.044881 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-vt92h_e7448d7f-ba89-4749-9cf2-60e55cffd82b/control-plane-machine-set-operator/0.log" Dec 13 22:29:29 crc kubenswrapper[4866]: I1213 22:29:29.218208 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-ppg68_6b8290a2-1abc-4343-8aa9-27a3f16f64f7/machine-api-operator/0.log" Dec 13 22:29:29 crc kubenswrapper[4866]: I1213 22:29:29.241672 4866 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-ppg68_6b8290a2-1abc-4343-8aa9-27a3f16f64f7/kube-rbac-proxy/0.log" Dec 13 22:29:33 crc kubenswrapper[4866]: I1213 22:29:33.035680 4866 patch_prober.go:28] interesting pod/machine-config-daemon-2855n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 22:29:33 crc kubenswrapper[4866]: I1213 22:29:33.036009 4866 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2855n" podUID="9749ec6a-aa76-4ae0-a9d0-453edbf21bca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 22:29:33 crc kubenswrapper[4866]: I1213 22:29:33.036089 4866 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2855n" Dec 13 22:29:33 crc kubenswrapper[4866]: I1213 22:29:33.036567 4866 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ae5acdfbac1d0aef7817a1dd14d929d1361d26782bef23ac5515732bb6d98ab6"} pod="openshift-machine-config-operator/machine-config-daemon-2855n" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 13 22:29:33 crc kubenswrapper[4866]: I1213 22:29:33.036616 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2855n" podUID="9749ec6a-aa76-4ae0-a9d0-453edbf21bca" containerName="machine-config-daemon" containerID="cri-o://ae5acdfbac1d0aef7817a1dd14d929d1361d26782bef23ac5515732bb6d98ab6" gracePeriod=600 Dec 13 22:29:33 crc kubenswrapper[4866]: I1213 22:29:33.527646 4866 generic.go:334] "Generic (PLEG): container finished" podID="9749ec6a-aa76-4ae0-a9d0-453edbf21bca" containerID="ae5acdfbac1d0aef7817a1dd14d929d1361d26782bef23ac5515732bb6d98ab6" exitCode=0 Dec 13 22:29:33 crc kubenswrapper[4866]: I1213 22:29:33.527946 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2855n" event={"ID":"9749ec6a-aa76-4ae0-a9d0-453edbf21bca","Type":"ContainerDied","Data":"ae5acdfbac1d0aef7817a1dd14d929d1361d26782bef23ac5515732bb6d98ab6"} Dec 13 22:29:33 crc kubenswrapper[4866]: I1213 22:29:33.527975 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2855n" event={"ID":"9749ec6a-aa76-4ae0-a9d0-453edbf21bca","Type":"ContainerStarted","Data":"82ef2bb4b2988326dcf74ec119d32e88da7c55e5f085330cd1ea94e966204350"} Dec 13 22:29:33 crc kubenswrapper[4866]: I1213 22:29:33.527994 4866 scope.go:117] "RemoveContainer" containerID="ad1a6b23e698b82e4bfd39f1d3e7801aec506999eb17e78abc5dcf82b4f13e6f" Dec 13 22:29:40 crc kubenswrapper[4866]: I1213 22:29:40.308842 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-dk4x7_7b97b708-fde5-45b4-97e4-b27430b2751a/cert-manager-controller/0.log" Dec 13 22:29:40 crc kubenswrapper[4866]: I1213 22:29:40.459322 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-mv76r_69d95c52-0a76-4712-8380-1e38eba5643a/cert-manager-cainjector/0.log" Dec 13 22:29:40 crc 
kubenswrapper[4866]: I1213 22:29:40.511549 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-7w2tf_1cfa97e6-d9ad-4e17-925b-293ddca0f525/cert-manager-webhook/0.log" Dec 13 22:29:55 crc kubenswrapper[4866]: I1213 22:29:55.905141 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-khmc8_70a5d2fc-f001-494b-aa01-cb967df89845/extract-utilities/0.log" Dec 13 22:29:56 crc kubenswrapper[4866]: I1213 22:29:56.077883 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-khmc8_70a5d2fc-f001-494b-aa01-cb967df89845/extract-utilities/0.log" Dec 13 22:29:56 crc kubenswrapper[4866]: I1213 22:29:56.113251 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-khmc8_70a5d2fc-f001-494b-aa01-cb967df89845/extract-content/0.log" Dec 13 22:29:56 crc kubenswrapper[4866]: I1213 22:29:56.157203 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-khmc8_70a5d2fc-f001-494b-aa01-cb967df89845/extract-content/0.log" Dec 13 22:29:56 crc kubenswrapper[4866]: I1213 22:29:56.313932 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-khmc8_70a5d2fc-f001-494b-aa01-cb967df89845/extract-utilities/0.log" Dec 13 22:29:56 crc kubenswrapper[4866]: I1213 22:29:56.372265 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-khmc8_70a5d2fc-f001-494b-aa01-cb967df89845/extract-content/0.log" Dec 13 22:29:56 crc kubenswrapper[4866]: I1213 22:29:56.387258 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-khmc8_70a5d2fc-f001-494b-aa01-cb967df89845/registry-server/0.log" Dec 13 22:29:56 crc kubenswrapper[4866]: I1213 22:29:56.526850 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4lk5n_e8d7fcbe-339e-4276-852c-0f959263f9d4/extract-utilities/0.log" Dec 13 22:29:56 crc kubenswrapper[4866]: I1213 22:29:56.697003 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4lk5n_e8d7fcbe-339e-4276-852c-0f959263f9d4/extract-utilities/0.log" Dec 13 22:29:56 crc kubenswrapper[4866]: I1213 22:29:56.732150 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4lk5n_e8d7fcbe-339e-4276-852c-0f959263f9d4/extract-content/0.log" Dec 13 22:29:56 crc kubenswrapper[4866]: I1213 22:29:56.735262 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4lk5n_e8d7fcbe-339e-4276-852c-0f959263f9d4/extract-content/0.log" Dec 13 22:29:56 crc kubenswrapper[4866]: I1213 22:29:56.902096 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4lk5n_e8d7fcbe-339e-4276-852c-0f959263f9d4/extract-content/0.log" Dec 13 22:29:56 crc kubenswrapper[4866]: I1213 22:29:56.967239 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4lk5n_e8d7fcbe-339e-4276-852c-0f959263f9d4/extract-utilities/0.log" Dec 13 22:29:57 crc kubenswrapper[4866]: I1213 22:29:57.030632 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4lk5n_e8d7fcbe-339e-4276-852c-0f959263f9d4/registry-server/0.log" Dec 13 22:29:57 crc 
kubenswrapper[4866]: I1213 22:29:57.237967 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-mjtlq_814d3162-a076-488a-9162-b5154651254d/marketplace-operator/0.log" Dec 13 22:29:57 crc kubenswrapper[4866]: I1213 22:29:57.256414 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2x8bb_c3d9cb6c-33b5-426f-9302-645ee78042d8/extract-utilities/0.log" Dec 13 22:29:57 crc kubenswrapper[4866]: I1213 22:29:57.476296 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2x8bb_c3d9cb6c-33b5-426f-9302-645ee78042d8/extract-content/0.log" Dec 13 22:29:57 crc kubenswrapper[4866]: I1213 22:29:57.498638 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2x8bb_c3d9cb6c-33b5-426f-9302-645ee78042d8/extract-utilities/0.log" Dec 13 22:29:57 crc kubenswrapper[4866]: I1213 22:29:57.522405 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2x8bb_c3d9cb6c-33b5-426f-9302-645ee78042d8/extract-content/0.log" Dec 13 22:29:57 crc kubenswrapper[4866]: I1213 22:29:57.718274 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2x8bb_c3d9cb6c-33b5-426f-9302-645ee78042d8/extract-content/0.log" Dec 13 22:29:57 crc kubenswrapper[4866]: I1213 22:29:57.745941 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2x8bb_c3d9cb6c-33b5-426f-9302-645ee78042d8/registry-server/0.log" Dec 13 22:29:57 crc kubenswrapper[4866]: I1213 22:29:57.760984 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2x8bb_c3d9cb6c-33b5-426f-9302-645ee78042d8/extract-utilities/0.log" Dec 13 22:29:57 crc kubenswrapper[4866]: I1213 22:29:57.902413 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xtkpz_106fc235-f259-42cd-9457-61b834ebe9e4/extract-utilities/0.log" Dec 13 22:29:58 crc kubenswrapper[4866]: I1213 22:29:58.077418 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xtkpz_106fc235-f259-42cd-9457-61b834ebe9e4/extract-utilities/0.log" Dec 13 22:29:58 crc kubenswrapper[4866]: I1213 22:29:58.082448 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xtkpz_106fc235-f259-42cd-9457-61b834ebe9e4/extract-content/0.log" Dec 13 22:29:58 crc kubenswrapper[4866]: I1213 22:29:58.132417 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xtkpz_106fc235-f259-42cd-9457-61b834ebe9e4/extract-content/0.log" Dec 13 22:29:58 crc kubenswrapper[4866]: I1213 22:29:58.255920 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xtkpz_106fc235-f259-42cd-9457-61b834ebe9e4/extract-utilities/0.log" Dec 13 22:29:58 crc kubenswrapper[4866]: I1213 22:29:58.288999 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xtkpz_106fc235-f259-42cd-9457-61b834ebe9e4/extract-content/0.log" Dec 13 22:29:58 crc kubenswrapper[4866]: I1213 22:29:58.345241 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xtkpz_106fc235-f259-42cd-9457-61b834ebe9e4/registry-server/0.log" Dec 13 22:30:00 crc kubenswrapper[4866]: I1213 
22:30:00.179882 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29427750-msdhh"] Dec 13 22:30:00 crc kubenswrapper[4866]: I1213 22:30:00.181068 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29427750-msdhh" Dec 13 22:30:00 crc kubenswrapper[4866]: I1213 22:30:00.183829 4866 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 13 22:30:00 crc kubenswrapper[4866]: I1213 22:30:00.184118 4866 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 13 22:30:00 crc kubenswrapper[4866]: I1213 22:30:00.189569 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29427750-msdhh"] Dec 13 22:30:00 crc kubenswrapper[4866]: I1213 22:30:00.335457 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba18fca9-f62e-48d7-b100-e19e46a31964-config-volume\") pod \"collect-profiles-29427750-msdhh\" (UID: \"ba18fca9-f62e-48d7-b100-e19e46a31964\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29427750-msdhh" Dec 13 22:30:00 crc kubenswrapper[4866]: I1213 22:30:00.335499 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zkw2\" (UniqueName: \"kubernetes.io/projected/ba18fca9-f62e-48d7-b100-e19e46a31964-kube-api-access-4zkw2\") pod \"collect-profiles-29427750-msdhh\" (UID: \"ba18fca9-f62e-48d7-b100-e19e46a31964\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29427750-msdhh" Dec 13 22:30:00 crc kubenswrapper[4866]: I1213 22:30:00.335523 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ba18fca9-f62e-48d7-b100-e19e46a31964-secret-volume\") pod \"collect-profiles-29427750-msdhh\" (UID: \"ba18fca9-f62e-48d7-b100-e19e46a31964\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29427750-msdhh" Dec 13 22:30:00 crc kubenswrapper[4866]: I1213 22:30:00.436757 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba18fca9-f62e-48d7-b100-e19e46a31964-config-volume\") pod \"collect-profiles-29427750-msdhh\" (UID: \"ba18fca9-f62e-48d7-b100-e19e46a31964\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29427750-msdhh" Dec 13 22:30:00 crc kubenswrapper[4866]: I1213 22:30:00.436818 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zkw2\" (UniqueName: \"kubernetes.io/projected/ba18fca9-f62e-48d7-b100-e19e46a31964-kube-api-access-4zkw2\") pod \"collect-profiles-29427750-msdhh\" (UID: \"ba18fca9-f62e-48d7-b100-e19e46a31964\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29427750-msdhh" Dec 13 22:30:00 crc kubenswrapper[4866]: I1213 22:30:00.436844 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ba18fca9-f62e-48d7-b100-e19e46a31964-secret-volume\") pod \"collect-profiles-29427750-msdhh\" (UID: \"ba18fca9-f62e-48d7-b100-e19e46a31964\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29427750-msdhh" Dec 13 22:30:00 crc kubenswrapper[4866]: I1213 22:30:00.437762 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba18fca9-f62e-48d7-b100-e19e46a31964-config-volume\") pod \"collect-profiles-29427750-msdhh\" (UID: \"ba18fca9-f62e-48d7-b100-e19e46a31964\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29427750-msdhh" Dec 13 22:30:00 crc kubenswrapper[4866]: I1213 22:30:00.444812 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ba18fca9-f62e-48d7-b100-e19e46a31964-secret-volume\") pod \"collect-profiles-29427750-msdhh\" (UID: \"ba18fca9-f62e-48d7-b100-e19e46a31964\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29427750-msdhh" Dec 13 22:30:00 crc kubenswrapper[4866]: I1213 22:30:00.455680 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zkw2\" (UniqueName: \"kubernetes.io/projected/ba18fca9-f62e-48d7-b100-e19e46a31964-kube-api-access-4zkw2\") pod \"collect-profiles-29427750-msdhh\" (UID: \"ba18fca9-f62e-48d7-b100-e19e46a31964\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29427750-msdhh" Dec 13 22:30:00 crc kubenswrapper[4866]: I1213 22:30:00.512421 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29427750-msdhh" Dec 13 22:30:00 crc kubenswrapper[4866]: I1213 22:30:00.916675 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29427750-msdhh"] Dec 13 22:30:01 crc kubenswrapper[4866]: I1213 22:30:01.653789 4866 generic.go:334] "Generic (PLEG): container finished" podID="ba18fca9-f62e-48d7-b100-e19e46a31964" containerID="c9cf238aeed2b702470e39ee3be3a888a52f634effa03801bbb88a17cfe45690" exitCode=0 Dec 13 22:30:01 crc kubenswrapper[4866]: I1213 22:30:01.653895 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29427750-msdhh" event={"ID":"ba18fca9-f62e-48d7-b100-e19e46a31964","Type":"ContainerDied","Data":"c9cf238aeed2b702470e39ee3be3a888a52f634effa03801bbb88a17cfe45690"} Dec 13 22:30:01 crc kubenswrapper[4866]: I1213 22:30:01.654077 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29427750-msdhh" event={"ID":"ba18fca9-f62e-48d7-b100-e19e46a31964","Type":"ContainerStarted","Data":"f96587d21ce98beaf2cc448087675045992b05ab3090d3640be9438be1c90682"} Dec 13 22:30:02 crc kubenswrapper[4866]: I1213 22:30:02.445812 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ks89f"] Dec 13 22:30:02 crc kubenswrapper[4866]: I1213 22:30:02.447029 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ks89f" Dec 13 22:30:02 crc kubenswrapper[4866]: I1213 22:30:02.461595 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ks89f"] Dec 13 22:30:02 crc kubenswrapper[4866]: I1213 22:30:02.474559 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtl59\" (UniqueName: \"kubernetes.io/projected/177f2232-5bf0-4ba1-babf-f5efc2849b3c-kube-api-access-xtl59\") pod \"certified-operators-ks89f\" (UID: \"177f2232-5bf0-4ba1-babf-f5efc2849b3c\") " pod="openshift-marketplace/certified-operators-ks89f" Dec 13 22:30:02 crc kubenswrapper[4866]: I1213 22:30:02.474636 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/177f2232-5bf0-4ba1-babf-f5efc2849b3c-catalog-content\") pod \"certified-operators-ks89f\" (UID: \"177f2232-5bf0-4ba1-babf-f5efc2849b3c\") " pod="openshift-marketplace/certified-operators-ks89f" Dec 13 22:30:02 crc kubenswrapper[4866]: I1213 22:30:02.474699 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/177f2232-5bf0-4ba1-babf-f5efc2849b3c-utilities\") pod \"certified-operators-ks89f\" (UID: \"177f2232-5bf0-4ba1-babf-f5efc2849b3c\") " pod="openshift-marketplace/certified-operators-ks89f" Dec 13 22:30:02 crc kubenswrapper[4866]: I1213 22:30:02.575678 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtl59\" (UniqueName: \"kubernetes.io/projected/177f2232-5bf0-4ba1-babf-f5efc2849b3c-kube-api-access-xtl59\") pod \"certified-operators-ks89f\" (UID: \"177f2232-5bf0-4ba1-babf-f5efc2849b3c\") " pod="openshift-marketplace/certified-operators-ks89f" Dec 13 22:30:02 crc kubenswrapper[4866]: I1213 22:30:02.575740 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/177f2232-5bf0-4ba1-babf-f5efc2849b3c-catalog-content\") pod \"certified-operators-ks89f\" (UID: \"177f2232-5bf0-4ba1-babf-f5efc2849b3c\") " pod="openshift-marketplace/certified-operators-ks89f" Dec 13 22:30:02 crc kubenswrapper[4866]: I1213 22:30:02.575845 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/177f2232-5bf0-4ba1-babf-f5efc2849b3c-utilities\") pod \"certified-operators-ks89f\" (UID: \"177f2232-5bf0-4ba1-babf-f5efc2849b3c\") " pod="openshift-marketplace/certified-operators-ks89f" Dec 13 22:30:02 crc kubenswrapper[4866]: I1213 22:30:02.576325 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/177f2232-5bf0-4ba1-babf-f5efc2849b3c-catalog-content\") pod \"certified-operators-ks89f\" (UID: \"177f2232-5bf0-4ba1-babf-f5efc2849b3c\") " pod="openshift-marketplace/certified-operators-ks89f" Dec 13 22:30:02 crc kubenswrapper[4866]: I1213 22:30:02.576533 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/177f2232-5bf0-4ba1-babf-f5efc2849b3c-utilities\") pod \"certified-operators-ks89f\" (UID: \"177f2232-5bf0-4ba1-babf-f5efc2849b3c\") " pod="openshift-marketplace/certified-operators-ks89f" Dec 13 22:30:02 crc kubenswrapper[4866]: I1213 22:30:02.610566 4866 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xtl59\" (UniqueName: \"kubernetes.io/projected/177f2232-5bf0-4ba1-babf-f5efc2849b3c-kube-api-access-xtl59\") pod \"certified-operators-ks89f\" (UID: \"177f2232-5bf0-4ba1-babf-f5efc2849b3c\") " pod="openshift-marketplace/certified-operators-ks89f" Dec 13 22:30:02 crc kubenswrapper[4866]: I1213 22:30:02.775218 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ks89f" Dec 13 22:30:02 crc kubenswrapper[4866]: I1213 22:30:02.949373 4866 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29427750-msdhh" Dec 13 22:30:02 crc kubenswrapper[4866]: I1213 22:30:02.979937 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba18fca9-f62e-48d7-b100-e19e46a31964-config-volume\") pod \"ba18fca9-f62e-48d7-b100-e19e46a31964\" (UID: \"ba18fca9-f62e-48d7-b100-e19e46a31964\") " Dec 13 22:30:02 crc kubenswrapper[4866]: I1213 22:30:02.979991 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ba18fca9-f62e-48d7-b100-e19e46a31964-secret-volume\") pod \"ba18fca9-f62e-48d7-b100-e19e46a31964\" (UID: \"ba18fca9-f62e-48d7-b100-e19e46a31964\") " Dec 13 22:30:02 crc kubenswrapper[4866]: I1213 22:30:02.980019 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zkw2\" (UniqueName: \"kubernetes.io/projected/ba18fca9-f62e-48d7-b100-e19e46a31964-kube-api-access-4zkw2\") pod \"ba18fca9-f62e-48d7-b100-e19e46a31964\" (UID: \"ba18fca9-f62e-48d7-b100-e19e46a31964\") " Dec 13 22:30:02 crc kubenswrapper[4866]: I1213 22:30:02.981083 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba18fca9-f62e-48d7-b100-e19e46a31964-config-volume" (OuterVolumeSpecName: "config-volume") pod "ba18fca9-f62e-48d7-b100-e19e46a31964" (UID: "ba18fca9-f62e-48d7-b100-e19e46a31964"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 22:30:02 crc kubenswrapper[4866]: I1213 22:30:02.989576 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba18fca9-f62e-48d7-b100-e19e46a31964-kube-api-access-4zkw2" (OuterVolumeSpecName: "kube-api-access-4zkw2") pod "ba18fca9-f62e-48d7-b100-e19e46a31964" (UID: "ba18fca9-f62e-48d7-b100-e19e46a31964"). InnerVolumeSpecName "kube-api-access-4zkw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:30:02 crc kubenswrapper[4866]: I1213 22:30:02.990807 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba18fca9-f62e-48d7-b100-e19e46a31964-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ba18fca9-f62e-48d7-b100-e19e46a31964" (UID: "ba18fca9-f62e-48d7-b100-e19e46a31964"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 22:30:03 crc kubenswrapper[4866]: I1213 22:30:03.086209 4866 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ba18fca9-f62e-48d7-b100-e19e46a31964-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 13 22:30:03 crc kubenswrapper[4866]: I1213 22:30:03.086239 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zkw2\" (UniqueName: \"kubernetes.io/projected/ba18fca9-f62e-48d7-b100-e19e46a31964-kube-api-access-4zkw2\") on node \"crc\" DevicePath \"\"" Dec 13 22:30:03 crc kubenswrapper[4866]: I1213 22:30:03.086249 4866 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba18fca9-f62e-48d7-b100-e19e46a31964-config-volume\") on node \"crc\" DevicePath \"\"" Dec 13 22:30:03 crc kubenswrapper[4866]: I1213 22:30:03.098899 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ks89f"] Dec 13 22:30:03 crc kubenswrapper[4866]: I1213 22:30:03.664683 4866 generic.go:334] "Generic (PLEG): container finished" podID="177f2232-5bf0-4ba1-babf-f5efc2849b3c" containerID="f2171febecc735b9648b41261d5bc56c550cc5576e528344cfa8d5b5e590ec22" exitCode=0 Dec 13 22:30:03 crc kubenswrapper[4866]: I1213 22:30:03.664746 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ks89f" event={"ID":"177f2232-5bf0-4ba1-babf-f5efc2849b3c","Type":"ContainerDied","Data":"f2171febecc735b9648b41261d5bc56c550cc5576e528344cfa8d5b5e590ec22"} Dec 13 22:30:03 crc kubenswrapper[4866]: I1213 22:30:03.665272 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ks89f" event={"ID":"177f2232-5bf0-4ba1-babf-f5efc2849b3c","Type":"ContainerStarted","Data":"dad1b836a59f15a086c612b3cab75ce4670c5c66b74456e512da45bc78dc8f52"} Dec 13 22:30:03 crc kubenswrapper[4866]: I1213 22:30:03.666831 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29427750-msdhh" event={"ID":"ba18fca9-f62e-48d7-b100-e19e46a31964","Type":"ContainerDied","Data":"f96587d21ce98beaf2cc448087675045992b05ab3090d3640be9438be1c90682"} Dec 13 22:30:03 crc kubenswrapper[4866]: I1213 22:30:03.666847 4866 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f96587d21ce98beaf2cc448087675045992b05ab3090d3640be9438be1c90682" Dec 13 22:30:03 crc kubenswrapper[4866]: I1213 22:30:03.666891 4866 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29427750-msdhh" Dec 13 22:30:04 crc kubenswrapper[4866]: I1213 22:30:04.673135 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ks89f" event={"ID":"177f2232-5bf0-4ba1-babf-f5efc2849b3c","Type":"ContainerStarted","Data":"f2ce2072d64031357f1a33c27573f49ab5e23f46077f2988003218cfa36b49dc"} Dec 13 22:30:05 crc kubenswrapper[4866]: I1213 22:30:05.680629 4866 generic.go:334] "Generic (PLEG): container finished" podID="177f2232-5bf0-4ba1-babf-f5efc2849b3c" containerID="f2ce2072d64031357f1a33c27573f49ab5e23f46077f2988003218cfa36b49dc" exitCode=0 Dec 13 22:30:05 crc kubenswrapper[4866]: I1213 22:30:05.680679 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ks89f" event={"ID":"177f2232-5bf0-4ba1-babf-f5efc2849b3c","Type":"ContainerDied","Data":"f2ce2072d64031357f1a33c27573f49ab5e23f46077f2988003218cfa36b49dc"} Dec 13 22:30:06 crc kubenswrapper[4866]: I1213 22:30:06.687210 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ks89f" event={"ID":"177f2232-5bf0-4ba1-babf-f5efc2849b3c","Type":"ContainerStarted","Data":"06dcc1d81c0a1213f56f1f7f0c04aaaddbc8c57d911581a0f69d7b6ae09fbe7d"} Dec 13 22:30:12 crc kubenswrapper[4866]: I1213 22:30:12.775896 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ks89f" Dec 13 22:30:12 crc kubenswrapper[4866]: I1213 22:30:12.776251 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ks89f" Dec 13 22:30:12 crc kubenswrapper[4866]: I1213 22:30:12.821802 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ks89f" Dec 13 22:30:12 crc kubenswrapper[4866]: I1213 22:30:12.836267 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ks89f" podStartSLOduration=8.251784445 podStartE2EDuration="10.836251715s" podCreationTimestamp="2025-12-13 22:30:02 +0000 UTC" firstStartedPulling="2025-12-13 22:30:03.666123658 +0000 UTC m=+801.707462200" lastFinishedPulling="2025-12-13 22:30:06.250590918 +0000 UTC m=+804.291929470" observedRunningTime="2025-12-13 22:30:06.709766878 +0000 UTC m=+804.751105440" watchObservedRunningTime="2025-12-13 22:30:12.836251715 +0000 UTC m=+810.877590267" Dec 13 22:30:13 crc kubenswrapper[4866]: I1213 22:30:13.768453 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ks89f" Dec 13 22:30:13 crc kubenswrapper[4866]: I1213 22:30:13.812687 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ks89f"] Dec 13 22:30:15 crc kubenswrapper[4866]: I1213 22:30:15.726896 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ks89f" podUID="177f2232-5bf0-4ba1-babf-f5efc2849b3c" containerName="registry-server" containerID="cri-o://06dcc1d81c0a1213f56f1f7f0c04aaaddbc8c57d911581a0f69d7b6ae09fbe7d" gracePeriod=2 Dec 13 22:30:17 crc kubenswrapper[4866]: I1213 22:30:17.739209 4866 generic.go:334] "Generic (PLEG): container finished" podID="177f2232-5bf0-4ba1-babf-f5efc2849b3c" containerID="06dcc1d81c0a1213f56f1f7f0c04aaaddbc8c57d911581a0f69d7b6ae09fbe7d" exitCode=0 Dec 13 22:30:17 crc 
kubenswrapper[4866]: I1213 22:30:17.739288 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ks89f" event={"ID":"177f2232-5bf0-4ba1-babf-f5efc2849b3c","Type":"ContainerDied","Data":"06dcc1d81c0a1213f56f1f7f0c04aaaddbc8c57d911581a0f69d7b6ae09fbe7d"} Dec 13 22:30:17 crc kubenswrapper[4866]: I1213 22:30:17.935372 4866 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ks89f" Dec 13 22:30:18 crc kubenswrapper[4866]: I1213 22:30:18.080980 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/177f2232-5bf0-4ba1-babf-f5efc2849b3c-catalog-content\") pod \"177f2232-5bf0-4ba1-babf-f5efc2849b3c\" (UID: \"177f2232-5bf0-4ba1-babf-f5efc2849b3c\") " Dec 13 22:30:18 crc kubenswrapper[4866]: I1213 22:30:18.081190 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/177f2232-5bf0-4ba1-babf-f5efc2849b3c-utilities\") pod \"177f2232-5bf0-4ba1-babf-f5efc2849b3c\" (UID: \"177f2232-5bf0-4ba1-babf-f5efc2849b3c\") " Dec 13 22:30:18 crc kubenswrapper[4866]: I1213 22:30:18.081252 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtl59\" (UniqueName: \"kubernetes.io/projected/177f2232-5bf0-4ba1-babf-f5efc2849b3c-kube-api-access-xtl59\") pod \"177f2232-5bf0-4ba1-babf-f5efc2849b3c\" (UID: \"177f2232-5bf0-4ba1-babf-f5efc2849b3c\") " Dec 13 22:30:18 crc kubenswrapper[4866]: I1213 22:30:18.082275 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/177f2232-5bf0-4ba1-babf-f5efc2849b3c-utilities" (OuterVolumeSpecName: "utilities") pod "177f2232-5bf0-4ba1-babf-f5efc2849b3c" (UID: "177f2232-5bf0-4ba1-babf-f5efc2849b3c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 22:30:18 crc kubenswrapper[4866]: I1213 22:30:18.098168 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/177f2232-5bf0-4ba1-babf-f5efc2849b3c-kube-api-access-xtl59" (OuterVolumeSpecName: "kube-api-access-xtl59") pod "177f2232-5bf0-4ba1-babf-f5efc2849b3c" (UID: "177f2232-5bf0-4ba1-babf-f5efc2849b3c"). InnerVolumeSpecName "kube-api-access-xtl59". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:30:18 crc kubenswrapper[4866]: I1213 22:30:18.139691 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/177f2232-5bf0-4ba1-babf-f5efc2849b3c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "177f2232-5bf0-4ba1-babf-f5efc2849b3c" (UID: "177f2232-5bf0-4ba1-babf-f5efc2849b3c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 22:30:18 crc kubenswrapper[4866]: I1213 22:30:18.182674 4866 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/177f2232-5bf0-4ba1-babf-f5efc2849b3c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 22:30:18 crc kubenswrapper[4866]: I1213 22:30:18.182707 4866 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/177f2232-5bf0-4ba1-babf-f5efc2849b3c-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 22:30:18 crc kubenswrapper[4866]: I1213 22:30:18.182716 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtl59\" (UniqueName: \"kubernetes.io/projected/177f2232-5bf0-4ba1-babf-f5efc2849b3c-kube-api-access-xtl59\") on node \"crc\" DevicePath \"\"" Dec 13 22:30:18 crc kubenswrapper[4866]: I1213 22:30:18.745863 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ks89f" event={"ID":"177f2232-5bf0-4ba1-babf-f5efc2849b3c","Type":"ContainerDied","Data":"dad1b836a59f15a086c612b3cab75ce4670c5c66b74456e512da45bc78dc8f52"} Dec 13 22:30:18 crc kubenswrapper[4866]: I1213 22:30:18.745921 4866 scope.go:117] "RemoveContainer" containerID="06dcc1d81c0a1213f56f1f7f0c04aaaddbc8c57d911581a0f69d7b6ae09fbe7d" Dec 13 22:30:18 crc kubenswrapper[4866]: I1213 22:30:18.746016 4866 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ks89f" Dec 13 22:30:18 crc kubenswrapper[4866]: I1213 22:30:18.766772 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ks89f"] Dec 13 22:30:18 crc kubenswrapper[4866]: I1213 22:30:18.773717 4866 scope.go:117] "RemoveContainer" containerID="f2ce2072d64031357f1a33c27573f49ab5e23f46077f2988003218cfa36b49dc" Dec 13 22:30:18 crc kubenswrapper[4866]: I1213 22:30:18.779630 4866 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ks89f"] Dec 13 22:30:18 crc kubenswrapper[4866]: I1213 22:30:18.792716 4866 scope.go:117] "RemoveContainer" containerID="f2171febecc735b9648b41261d5bc56c550cc5576e528344cfa8d5b5e590ec22" Dec 13 22:30:20 crc kubenswrapper[4866]: I1213 22:30:20.221149 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="177f2232-5bf0-4ba1-babf-f5efc2849b3c" path="/var/lib/kubelet/pods/177f2232-5bf0-4ba1-babf-f5efc2849b3c/volumes" Dec 13 22:30:30 crc kubenswrapper[4866]: I1213 22:30:30.468422 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kfrnv"] Dec 13 22:30:30 crc kubenswrapper[4866]: E1213 22:30:30.469209 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="177f2232-5bf0-4ba1-babf-f5efc2849b3c" containerName="registry-server" Dec 13 22:30:30 crc kubenswrapper[4866]: I1213 22:30:30.469225 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="177f2232-5bf0-4ba1-babf-f5efc2849b3c" containerName="registry-server" Dec 13 22:30:30 crc kubenswrapper[4866]: E1213 22:30:30.469242 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="177f2232-5bf0-4ba1-babf-f5efc2849b3c" containerName="extract-utilities" Dec 13 22:30:30 crc kubenswrapper[4866]: I1213 22:30:30.469251 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="177f2232-5bf0-4ba1-babf-f5efc2849b3c" containerName="extract-utilities" Dec 13 22:30:30 crc kubenswrapper[4866]: E1213 22:30:30.469269 4866 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="177f2232-5bf0-4ba1-babf-f5efc2849b3c" containerName="extract-content" Dec 13 22:30:30 crc kubenswrapper[4866]: I1213 22:30:30.469278 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="177f2232-5bf0-4ba1-babf-f5efc2849b3c" containerName="extract-content" Dec 13 22:30:30 crc kubenswrapper[4866]: E1213 22:30:30.469288 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba18fca9-f62e-48d7-b100-e19e46a31964" containerName="collect-profiles" Dec 13 22:30:30 crc kubenswrapper[4866]: I1213 22:30:30.469296 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba18fca9-f62e-48d7-b100-e19e46a31964" containerName="collect-profiles" Dec 13 22:30:30 crc kubenswrapper[4866]: I1213 22:30:30.469417 4866 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba18fca9-f62e-48d7-b100-e19e46a31964" containerName="collect-profiles" Dec 13 22:30:30 crc kubenswrapper[4866]: I1213 22:30:30.469434 4866 memory_manager.go:354] "RemoveStaleState removing state" podUID="177f2232-5bf0-4ba1-babf-f5efc2849b3c" containerName="registry-server" Dec 13 22:30:30 crc kubenswrapper[4866]: I1213 22:30:30.470304 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kfrnv" Dec 13 22:30:30 crc kubenswrapper[4866]: I1213 22:30:30.476592 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kfrnv"] Dec 13 22:30:30 crc kubenswrapper[4866]: I1213 22:30:30.531146 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ktsl\" (UniqueName: \"kubernetes.io/projected/39e722b6-06b7-4070-bcf3-174fcc301848-kube-api-access-7ktsl\") pod \"redhat-marketplace-kfrnv\" (UID: \"39e722b6-06b7-4070-bcf3-174fcc301848\") " pod="openshift-marketplace/redhat-marketplace-kfrnv" Dec 13 22:30:30 crc kubenswrapper[4866]: I1213 22:30:30.531405 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39e722b6-06b7-4070-bcf3-174fcc301848-catalog-content\") pod \"redhat-marketplace-kfrnv\" (UID: \"39e722b6-06b7-4070-bcf3-174fcc301848\") " pod="openshift-marketplace/redhat-marketplace-kfrnv" Dec 13 22:30:30 crc kubenswrapper[4866]: I1213 22:30:30.531464 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39e722b6-06b7-4070-bcf3-174fcc301848-utilities\") pod \"redhat-marketplace-kfrnv\" (UID: \"39e722b6-06b7-4070-bcf3-174fcc301848\") " pod="openshift-marketplace/redhat-marketplace-kfrnv" Dec 13 22:30:30 crc kubenswrapper[4866]: I1213 22:30:30.632511 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39e722b6-06b7-4070-bcf3-174fcc301848-utilities\") pod \"redhat-marketplace-kfrnv\" (UID: \"39e722b6-06b7-4070-bcf3-174fcc301848\") " pod="openshift-marketplace/redhat-marketplace-kfrnv" Dec 13 22:30:30 crc kubenswrapper[4866]: I1213 22:30:30.632619 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ktsl\" (UniqueName: \"kubernetes.io/projected/39e722b6-06b7-4070-bcf3-174fcc301848-kube-api-access-7ktsl\") pod \"redhat-marketplace-kfrnv\" (UID: \"39e722b6-06b7-4070-bcf3-174fcc301848\") " pod="openshift-marketplace/redhat-marketplace-kfrnv" Dec 13 22:30:30 crc 
kubenswrapper[4866]: I1213 22:30:30.632641 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39e722b6-06b7-4070-bcf3-174fcc301848-catalog-content\") pod \"redhat-marketplace-kfrnv\" (UID: \"39e722b6-06b7-4070-bcf3-174fcc301848\") " pod="openshift-marketplace/redhat-marketplace-kfrnv" Dec 13 22:30:30 crc kubenswrapper[4866]: I1213 22:30:30.632954 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39e722b6-06b7-4070-bcf3-174fcc301848-utilities\") pod \"redhat-marketplace-kfrnv\" (UID: \"39e722b6-06b7-4070-bcf3-174fcc301848\") " pod="openshift-marketplace/redhat-marketplace-kfrnv" Dec 13 22:30:30 crc kubenswrapper[4866]: I1213 22:30:30.632993 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39e722b6-06b7-4070-bcf3-174fcc301848-catalog-content\") pod \"redhat-marketplace-kfrnv\" (UID: \"39e722b6-06b7-4070-bcf3-174fcc301848\") " pod="openshift-marketplace/redhat-marketplace-kfrnv" Dec 13 22:30:30 crc kubenswrapper[4866]: I1213 22:30:30.651667 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ktsl\" (UniqueName: \"kubernetes.io/projected/39e722b6-06b7-4070-bcf3-174fcc301848-kube-api-access-7ktsl\") pod \"redhat-marketplace-kfrnv\" (UID: \"39e722b6-06b7-4070-bcf3-174fcc301848\") " pod="openshift-marketplace/redhat-marketplace-kfrnv" Dec 13 22:30:30 crc kubenswrapper[4866]: I1213 22:30:30.829257 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kfrnv" Dec 13 22:30:31 crc kubenswrapper[4866]: I1213 22:30:31.058458 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kfrnv"] Dec 13 22:30:31 crc kubenswrapper[4866]: I1213 22:30:31.816533 4866 generic.go:334] "Generic (PLEG): container finished" podID="39e722b6-06b7-4070-bcf3-174fcc301848" containerID="fc213ac40e8b23350c7341ccf249d1da9032be9137163bf4c84026fb0ed2d2ee" exitCode=0 Dec 13 22:30:31 crc kubenswrapper[4866]: I1213 22:30:31.816578 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kfrnv" event={"ID":"39e722b6-06b7-4070-bcf3-174fcc301848","Type":"ContainerDied","Data":"fc213ac40e8b23350c7341ccf249d1da9032be9137163bf4c84026fb0ed2d2ee"} Dec 13 22:30:31 crc kubenswrapper[4866]: I1213 22:30:31.816605 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kfrnv" event={"ID":"39e722b6-06b7-4070-bcf3-174fcc301848","Type":"ContainerStarted","Data":"a458da3696173313e37c31ba5cb58afcfe7ef2e497d6d20cc32171bd8533fb40"} Dec 13 22:30:32 crc kubenswrapper[4866]: I1213 22:30:32.823115 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kfrnv" event={"ID":"39e722b6-06b7-4070-bcf3-174fcc301848","Type":"ContainerStarted","Data":"488d913957d86ebc59cab033cf4bb2e8638f5ae2b809721a3259a66bee6bc836"} Dec 13 22:30:33 crc kubenswrapper[4866]: I1213 22:30:33.829370 4866 generic.go:334] "Generic (PLEG): container finished" podID="39e722b6-06b7-4070-bcf3-174fcc301848" containerID="488d913957d86ebc59cab033cf4bb2e8638f5ae2b809721a3259a66bee6bc836" exitCode=0 Dec 13 22:30:33 crc kubenswrapper[4866]: I1213 22:30:33.830097 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kfrnv" 
event={"ID":"39e722b6-06b7-4070-bcf3-174fcc301848","Type":"ContainerDied","Data":"488d913957d86ebc59cab033cf4bb2e8638f5ae2b809721a3259a66bee6bc836"} Dec 13 22:30:34 crc kubenswrapper[4866]: I1213 22:30:34.835834 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kfrnv" event={"ID":"39e722b6-06b7-4070-bcf3-174fcc301848","Type":"ContainerStarted","Data":"0aba6197a4a16bd9c0f5879a783ec171c0c25a2bae91feaa911390b9ffddb7b1"} Dec 13 22:30:34 crc kubenswrapper[4866]: I1213 22:30:34.858040 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kfrnv" podStartSLOduration=2.327277261 podStartE2EDuration="4.858025977s" podCreationTimestamp="2025-12-13 22:30:30 +0000 UTC" firstStartedPulling="2025-12-13 22:30:31.817781769 +0000 UTC m=+829.859120321" lastFinishedPulling="2025-12-13 22:30:34.348530485 +0000 UTC m=+832.389869037" observedRunningTime="2025-12-13 22:30:34.855598689 +0000 UTC m=+832.896937251" watchObservedRunningTime="2025-12-13 22:30:34.858025977 +0000 UTC m=+832.899364539" Dec 13 22:30:37 crc kubenswrapper[4866]: I1213 22:30:37.855705 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k2rdw"] Dec 13 22:30:37 crc kubenswrapper[4866]: I1213 22:30:37.857121 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k2rdw" Dec 13 22:30:37 crc kubenswrapper[4866]: I1213 22:30:37.877434 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k2rdw"] Dec 13 22:30:38 crc kubenswrapper[4866]: I1213 22:30:38.024706 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf23dd3-c096-4f88-8bce-05be9902e173-catalog-content\") pod \"redhat-operators-k2rdw\" (UID: \"3bf23dd3-c096-4f88-8bce-05be9902e173\") " pod="openshift-marketplace/redhat-operators-k2rdw" Dec 13 22:30:38 crc kubenswrapper[4866]: I1213 22:30:38.024750 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps7bf\" (UniqueName: \"kubernetes.io/projected/3bf23dd3-c096-4f88-8bce-05be9902e173-kube-api-access-ps7bf\") pod \"redhat-operators-k2rdw\" (UID: \"3bf23dd3-c096-4f88-8bce-05be9902e173\") " pod="openshift-marketplace/redhat-operators-k2rdw" Dec 13 22:30:38 crc kubenswrapper[4866]: I1213 22:30:38.024789 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf23dd3-c096-4f88-8bce-05be9902e173-utilities\") pod \"redhat-operators-k2rdw\" (UID: \"3bf23dd3-c096-4f88-8bce-05be9902e173\") " pod="openshift-marketplace/redhat-operators-k2rdw" Dec 13 22:30:38 crc kubenswrapper[4866]: I1213 22:30:38.125615 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf23dd3-c096-4f88-8bce-05be9902e173-catalog-content\") pod \"redhat-operators-k2rdw\" (UID: \"3bf23dd3-c096-4f88-8bce-05be9902e173\") " pod="openshift-marketplace/redhat-operators-k2rdw" Dec 13 22:30:38 crc kubenswrapper[4866]: I1213 22:30:38.125674 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps7bf\" (UniqueName: \"kubernetes.io/projected/3bf23dd3-c096-4f88-8bce-05be9902e173-kube-api-access-ps7bf\") pod \"redhat-operators-k2rdw\" 
(UID: \"3bf23dd3-c096-4f88-8bce-05be9902e173\") " pod="openshift-marketplace/redhat-operators-k2rdw" Dec 13 22:30:38 crc kubenswrapper[4866]: I1213 22:30:38.125728 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf23dd3-c096-4f88-8bce-05be9902e173-utilities\") pod \"redhat-operators-k2rdw\" (UID: \"3bf23dd3-c096-4f88-8bce-05be9902e173\") " pod="openshift-marketplace/redhat-operators-k2rdw" Dec 13 22:30:38 crc kubenswrapper[4866]: I1213 22:30:38.126174 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf23dd3-c096-4f88-8bce-05be9902e173-catalog-content\") pod \"redhat-operators-k2rdw\" (UID: \"3bf23dd3-c096-4f88-8bce-05be9902e173\") " pod="openshift-marketplace/redhat-operators-k2rdw" Dec 13 22:30:38 crc kubenswrapper[4866]: I1213 22:30:38.126204 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf23dd3-c096-4f88-8bce-05be9902e173-utilities\") pod \"redhat-operators-k2rdw\" (UID: \"3bf23dd3-c096-4f88-8bce-05be9902e173\") " pod="openshift-marketplace/redhat-operators-k2rdw" Dec 13 22:30:38 crc kubenswrapper[4866]: I1213 22:30:38.146472 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps7bf\" (UniqueName: \"kubernetes.io/projected/3bf23dd3-c096-4f88-8bce-05be9902e173-kube-api-access-ps7bf\") pod \"redhat-operators-k2rdw\" (UID: \"3bf23dd3-c096-4f88-8bce-05be9902e173\") " pod="openshift-marketplace/redhat-operators-k2rdw" Dec 13 22:30:38 crc kubenswrapper[4866]: I1213 22:30:38.171269 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k2rdw" Dec 13 22:30:38 crc kubenswrapper[4866]: I1213 22:30:38.393079 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k2rdw"] Dec 13 22:30:38 crc kubenswrapper[4866]: W1213 22:30:38.399083 4866 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bf23dd3_c096_4f88_8bce_05be9902e173.slice/crio-feacdb3167578d707fabce1d23ff5cb6bad73c62b7a82db6e14aaea1472c03cd WatchSource:0}: Error finding container feacdb3167578d707fabce1d23ff5cb6bad73c62b7a82db6e14aaea1472c03cd: Status 404 returned error can't find the container with id feacdb3167578d707fabce1d23ff5cb6bad73c62b7a82db6e14aaea1472c03cd Dec 13 22:30:38 crc kubenswrapper[4866]: I1213 22:30:38.875018 4866 generic.go:334] "Generic (PLEG): container finished" podID="3bf23dd3-c096-4f88-8bce-05be9902e173" containerID="f16ad243c0b1bcffd7a461bbbfcd23c8a58f751e963a59f6cf7a5fdf52d25565" exitCode=0 Dec 13 22:30:38 crc kubenswrapper[4866]: I1213 22:30:38.875098 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2rdw" event={"ID":"3bf23dd3-c096-4f88-8bce-05be9902e173","Type":"ContainerDied","Data":"f16ad243c0b1bcffd7a461bbbfcd23c8a58f751e963a59f6cf7a5fdf52d25565"} Dec 13 22:30:38 crc kubenswrapper[4866]: I1213 22:30:38.875124 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2rdw" event={"ID":"3bf23dd3-c096-4f88-8bce-05be9902e173","Type":"ContainerStarted","Data":"feacdb3167578d707fabce1d23ff5cb6bad73c62b7a82db6e14aaea1472c03cd"} Dec 13 22:30:39 crc kubenswrapper[4866]: I1213 22:30:39.881483 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-k2rdw" event={"ID":"3bf23dd3-c096-4f88-8bce-05be9902e173","Type":"ContainerStarted","Data":"0b5f73f3125f0be29d45d5e183054d8352459d0ff609927ca7f63c2ee52ac114"} Dec 13 22:30:40 crc kubenswrapper[4866]: I1213 22:30:40.829755 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kfrnv" Dec 13 22:30:40 crc kubenswrapper[4866]: I1213 22:30:40.831237 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kfrnv" Dec 13 22:30:40 crc kubenswrapper[4866]: I1213 22:30:40.868075 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kfrnv" Dec 13 22:30:40 crc kubenswrapper[4866]: I1213 22:30:40.889942 4866 generic.go:334] "Generic (PLEG): container finished" podID="3bf23dd3-c096-4f88-8bce-05be9902e173" containerID="0b5f73f3125f0be29d45d5e183054d8352459d0ff609927ca7f63c2ee52ac114" exitCode=0 Dec 13 22:30:40 crc kubenswrapper[4866]: I1213 22:30:40.889992 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2rdw" event={"ID":"3bf23dd3-c096-4f88-8bce-05be9902e173","Type":"ContainerDied","Data":"0b5f73f3125f0be29d45d5e183054d8352459d0ff609927ca7f63c2ee52ac114"} Dec 13 22:30:40 crc kubenswrapper[4866]: I1213 22:30:40.926621 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kfrnv" Dec 13 22:30:41 crc kubenswrapper[4866]: I1213 22:30:41.896438 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2rdw" event={"ID":"3bf23dd3-c096-4f88-8bce-05be9902e173","Type":"ContainerStarted","Data":"34f6df54908d8c694d1187eefc8468feaeb5ef6c97d727c63a63ab91f9e35639"} Dec 13 22:30:43 crc kubenswrapper[4866]: I1213 22:30:43.247321 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k2rdw" podStartSLOduration=3.739723821 podStartE2EDuration="6.247304392s" podCreationTimestamp="2025-12-13 22:30:37 +0000 UTC" firstStartedPulling="2025-12-13 22:30:38.885142804 +0000 UTC m=+836.926481356" lastFinishedPulling="2025-12-13 22:30:41.392723355 +0000 UTC m=+839.434061927" observedRunningTime="2025-12-13 22:30:41.916068579 +0000 UTC m=+839.957407141" watchObservedRunningTime="2025-12-13 22:30:43.247304392 +0000 UTC m=+841.288642944" Dec 13 22:30:43 crc kubenswrapper[4866]: I1213 22:30:43.248735 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kfrnv"] Dec 13 22:30:43 crc kubenswrapper[4866]: I1213 22:30:43.905277 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kfrnv" podUID="39e722b6-06b7-4070-bcf3-174fcc301848" containerName="registry-server" containerID="cri-o://0aba6197a4a16bd9c0f5879a783ec171c0c25a2bae91feaa911390b9ffddb7b1" gracePeriod=2 Dec 13 22:30:44 crc kubenswrapper[4866]: I1213 22:30:44.842152 4866 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kfrnv" Dec 13 22:30:44 crc kubenswrapper[4866]: I1213 22:30:44.904490 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ktsl\" (UniqueName: \"kubernetes.io/projected/39e722b6-06b7-4070-bcf3-174fcc301848-kube-api-access-7ktsl\") pod \"39e722b6-06b7-4070-bcf3-174fcc301848\" (UID: \"39e722b6-06b7-4070-bcf3-174fcc301848\") " Dec 13 22:30:44 crc kubenswrapper[4866]: I1213 22:30:44.904597 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39e722b6-06b7-4070-bcf3-174fcc301848-catalog-content\") pod \"39e722b6-06b7-4070-bcf3-174fcc301848\" (UID: \"39e722b6-06b7-4070-bcf3-174fcc301848\") " Dec 13 22:30:44 crc kubenswrapper[4866]: I1213 22:30:44.904661 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39e722b6-06b7-4070-bcf3-174fcc301848-utilities\") pod \"39e722b6-06b7-4070-bcf3-174fcc301848\" (UID: \"39e722b6-06b7-4070-bcf3-174fcc301848\") " Dec 13 22:30:44 crc kubenswrapper[4866]: I1213 22:30:44.905719 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39e722b6-06b7-4070-bcf3-174fcc301848-utilities" (OuterVolumeSpecName: "utilities") pod "39e722b6-06b7-4070-bcf3-174fcc301848" (UID: "39e722b6-06b7-4070-bcf3-174fcc301848"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 22:30:44 crc kubenswrapper[4866]: I1213 22:30:44.911560 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39e722b6-06b7-4070-bcf3-174fcc301848-kube-api-access-7ktsl" (OuterVolumeSpecName: "kube-api-access-7ktsl") pod "39e722b6-06b7-4070-bcf3-174fcc301848" (UID: "39e722b6-06b7-4070-bcf3-174fcc301848"). InnerVolumeSpecName "kube-api-access-7ktsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:30:44 crc kubenswrapper[4866]: I1213 22:30:44.913242 4866 generic.go:334] "Generic (PLEG): container finished" podID="39e722b6-06b7-4070-bcf3-174fcc301848" containerID="0aba6197a4a16bd9c0f5879a783ec171c0c25a2bae91feaa911390b9ffddb7b1" exitCode=0 Dec 13 22:30:44 crc kubenswrapper[4866]: I1213 22:30:44.913372 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kfrnv" event={"ID":"39e722b6-06b7-4070-bcf3-174fcc301848","Type":"ContainerDied","Data":"0aba6197a4a16bd9c0f5879a783ec171c0c25a2bae91feaa911390b9ffddb7b1"} Dec 13 22:30:44 crc kubenswrapper[4866]: I1213 22:30:44.913470 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kfrnv" event={"ID":"39e722b6-06b7-4070-bcf3-174fcc301848","Type":"ContainerDied","Data":"a458da3696173313e37c31ba5cb58afcfe7ef2e497d6d20cc32171bd8533fb40"} Dec 13 22:30:44 crc kubenswrapper[4866]: I1213 22:30:44.913559 4866 scope.go:117] "RemoveContainer" containerID="0aba6197a4a16bd9c0f5879a783ec171c0c25a2bae91feaa911390b9ffddb7b1" Dec 13 22:30:44 crc kubenswrapper[4866]: I1213 22:30:44.913762 4866 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kfrnv" Dec 13 22:30:44 crc kubenswrapper[4866]: I1213 22:30:44.925019 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39e722b6-06b7-4070-bcf3-174fcc301848-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "39e722b6-06b7-4070-bcf3-174fcc301848" (UID: "39e722b6-06b7-4070-bcf3-174fcc301848"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 22:30:44 crc kubenswrapper[4866]: I1213 22:30:44.932433 4866 scope.go:117] "RemoveContainer" containerID="488d913957d86ebc59cab033cf4bb2e8638f5ae2b809721a3259a66bee6bc836" Dec 13 22:30:44 crc kubenswrapper[4866]: I1213 22:30:44.950836 4866 scope.go:117] "RemoveContainer" containerID="fc213ac40e8b23350c7341ccf249d1da9032be9137163bf4c84026fb0ed2d2ee" Dec 13 22:30:44 crc kubenswrapper[4866]: I1213 22:30:44.970611 4866 scope.go:117] "RemoveContainer" containerID="0aba6197a4a16bd9c0f5879a783ec171c0c25a2bae91feaa911390b9ffddb7b1" Dec 13 22:30:44 crc kubenswrapper[4866]: E1213 22:30:44.970909 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0aba6197a4a16bd9c0f5879a783ec171c0c25a2bae91feaa911390b9ffddb7b1\": container with ID starting with 0aba6197a4a16bd9c0f5879a783ec171c0c25a2bae91feaa911390b9ffddb7b1 not found: ID does not exist" containerID="0aba6197a4a16bd9c0f5879a783ec171c0c25a2bae91feaa911390b9ffddb7b1" Dec 13 22:30:44 crc kubenswrapper[4866]: I1213 22:30:44.970940 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0aba6197a4a16bd9c0f5879a783ec171c0c25a2bae91feaa911390b9ffddb7b1"} err="failed to get container status \"0aba6197a4a16bd9c0f5879a783ec171c0c25a2bae91feaa911390b9ffddb7b1\": rpc error: code = NotFound desc = could not find container \"0aba6197a4a16bd9c0f5879a783ec171c0c25a2bae91feaa911390b9ffddb7b1\": container with ID starting with 0aba6197a4a16bd9c0f5879a783ec171c0c25a2bae91feaa911390b9ffddb7b1 not found: ID does not exist" Dec 13 22:30:44 crc kubenswrapper[4866]: I1213 22:30:44.970960 4866 scope.go:117] "RemoveContainer" containerID="488d913957d86ebc59cab033cf4bb2e8638f5ae2b809721a3259a66bee6bc836" Dec 13 22:30:44 crc kubenswrapper[4866]: E1213 22:30:44.971176 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"488d913957d86ebc59cab033cf4bb2e8638f5ae2b809721a3259a66bee6bc836\": container with ID starting with 488d913957d86ebc59cab033cf4bb2e8638f5ae2b809721a3259a66bee6bc836 not found: ID does not exist" containerID="488d913957d86ebc59cab033cf4bb2e8638f5ae2b809721a3259a66bee6bc836" Dec 13 22:30:44 crc kubenswrapper[4866]: I1213 22:30:44.971196 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"488d913957d86ebc59cab033cf4bb2e8638f5ae2b809721a3259a66bee6bc836"} err="failed to get container status \"488d913957d86ebc59cab033cf4bb2e8638f5ae2b809721a3259a66bee6bc836\": rpc error: code = NotFound desc = could not find container \"488d913957d86ebc59cab033cf4bb2e8638f5ae2b809721a3259a66bee6bc836\": container with ID starting with 488d913957d86ebc59cab033cf4bb2e8638f5ae2b809721a3259a66bee6bc836 not found: ID does not exist" Dec 13 22:30:44 crc kubenswrapper[4866]: I1213 22:30:44.971209 4866 scope.go:117] "RemoveContainer" containerID="fc213ac40e8b23350c7341ccf249d1da9032be9137163bf4c84026fb0ed2d2ee" Dec 13 22:30:44 crc 
kubenswrapper[4866]: E1213 22:30:44.971424 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc213ac40e8b23350c7341ccf249d1da9032be9137163bf4c84026fb0ed2d2ee\": container with ID starting with fc213ac40e8b23350c7341ccf249d1da9032be9137163bf4c84026fb0ed2d2ee not found: ID does not exist" containerID="fc213ac40e8b23350c7341ccf249d1da9032be9137163bf4c84026fb0ed2d2ee" Dec 13 22:30:44 crc kubenswrapper[4866]: I1213 22:30:44.971443 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc213ac40e8b23350c7341ccf249d1da9032be9137163bf4c84026fb0ed2d2ee"} err="failed to get container status \"fc213ac40e8b23350c7341ccf249d1da9032be9137163bf4c84026fb0ed2d2ee\": rpc error: code = NotFound desc = could not find container \"fc213ac40e8b23350c7341ccf249d1da9032be9137163bf4c84026fb0ed2d2ee\": container with ID starting with fc213ac40e8b23350c7341ccf249d1da9032be9137163bf4c84026fb0ed2d2ee not found: ID does not exist" Dec 13 22:30:45 crc kubenswrapper[4866]: I1213 22:30:45.005435 4866 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39e722b6-06b7-4070-bcf3-174fcc301848-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 22:30:45 crc kubenswrapper[4866]: I1213 22:30:45.005699 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ktsl\" (UniqueName: \"kubernetes.io/projected/39e722b6-06b7-4070-bcf3-174fcc301848-kube-api-access-7ktsl\") on node \"crc\" DevicePath \"\"" Dec 13 22:30:45 crc kubenswrapper[4866]: I1213 22:30:45.005711 4866 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39e722b6-06b7-4070-bcf3-174fcc301848-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 22:30:45 crc kubenswrapper[4866]: I1213 22:30:45.050670 4866 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ts95z"] Dec 13 22:30:45 crc kubenswrapper[4866]: E1213 22:30:45.050886 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39e722b6-06b7-4070-bcf3-174fcc301848" containerName="extract-utilities" Dec 13 22:30:45 crc kubenswrapper[4866]: I1213 22:30:45.050899 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="39e722b6-06b7-4070-bcf3-174fcc301848" containerName="extract-utilities" Dec 13 22:30:45 crc kubenswrapper[4866]: E1213 22:30:45.050909 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39e722b6-06b7-4070-bcf3-174fcc301848" containerName="extract-content" Dec 13 22:30:45 crc kubenswrapper[4866]: I1213 22:30:45.050914 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="39e722b6-06b7-4070-bcf3-174fcc301848" containerName="extract-content" Dec 13 22:30:45 crc kubenswrapper[4866]: E1213 22:30:45.050936 4866 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39e722b6-06b7-4070-bcf3-174fcc301848" containerName="registry-server" Dec 13 22:30:45 crc kubenswrapper[4866]: I1213 22:30:45.050942 4866 state_mem.go:107] "Deleted CPUSet assignment" podUID="39e722b6-06b7-4070-bcf3-174fcc301848" containerName="registry-server" Dec 13 22:30:45 crc kubenswrapper[4866]: I1213 22:30:45.051031 4866 memory_manager.go:354] "RemoveStaleState removing state" podUID="39e722b6-06b7-4070-bcf3-174fcc301848" containerName="registry-server" Dec 13 22:30:45 crc kubenswrapper[4866]: I1213 22:30:45.051752 4866 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ts95z" Dec 13 22:30:45 crc kubenswrapper[4866]: I1213 22:30:45.066389 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ts95z"] Dec 13 22:30:45 crc kubenswrapper[4866]: I1213 22:30:45.106356 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88fa99d3-ec5d-4833-9507-18ddd8aecf20-catalog-content\") pod \"community-operators-ts95z\" (UID: \"88fa99d3-ec5d-4833-9507-18ddd8aecf20\") " pod="openshift-marketplace/community-operators-ts95z" Dec 13 22:30:45 crc kubenswrapper[4866]: I1213 22:30:45.106415 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tb6s\" (UniqueName: \"kubernetes.io/projected/88fa99d3-ec5d-4833-9507-18ddd8aecf20-kube-api-access-8tb6s\") pod \"community-operators-ts95z\" (UID: \"88fa99d3-ec5d-4833-9507-18ddd8aecf20\") " pod="openshift-marketplace/community-operators-ts95z" Dec 13 22:30:45 crc kubenswrapper[4866]: I1213 22:30:45.106468 4866 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88fa99d3-ec5d-4833-9507-18ddd8aecf20-utilities\") pod \"community-operators-ts95z\" (UID: \"88fa99d3-ec5d-4833-9507-18ddd8aecf20\") " pod="openshift-marketplace/community-operators-ts95z" Dec 13 22:30:45 crc kubenswrapper[4866]: I1213 22:30:45.206974 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88fa99d3-ec5d-4833-9507-18ddd8aecf20-utilities\") pod \"community-operators-ts95z\" (UID: \"88fa99d3-ec5d-4833-9507-18ddd8aecf20\") " pod="openshift-marketplace/community-operators-ts95z" Dec 13 22:30:45 crc kubenswrapper[4866]: I1213 22:30:45.207061 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88fa99d3-ec5d-4833-9507-18ddd8aecf20-catalog-content\") pod \"community-operators-ts95z\" (UID: \"88fa99d3-ec5d-4833-9507-18ddd8aecf20\") " pod="openshift-marketplace/community-operators-ts95z" Dec 13 22:30:45 crc kubenswrapper[4866]: I1213 22:30:45.207102 4866 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tb6s\" (UniqueName: \"kubernetes.io/projected/88fa99d3-ec5d-4833-9507-18ddd8aecf20-kube-api-access-8tb6s\") pod \"community-operators-ts95z\" (UID: \"88fa99d3-ec5d-4833-9507-18ddd8aecf20\") " pod="openshift-marketplace/community-operators-ts95z" Dec 13 22:30:45 crc kubenswrapper[4866]: I1213 22:30:45.207618 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88fa99d3-ec5d-4833-9507-18ddd8aecf20-utilities\") pod \"community-operators-ts95z\" (UID: \"88fa99d3-ec5d-4833-9507-18ddd8aecf20\") " pod="openshift-marketplace/community-operators-ts95z" Dec 13 22:30:45 crc kubenswrapper[4866]: I1213 22:30:45.207634 4866 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88fa99d3-ec5d-4833-9507-18ddd8aecf20-catalog-content\") pod \"community-operators-ts95z\" (UID: \"88fa99d3-ec5d-4833-9507-18ddd8aecf20\") " pod="openshift-marketplace/community-operators-ts95z" Dec 13 22:30:45 crc kubenswrapper[4866]: I1213 22:30:45.226580 4866 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8tb6s\" (UniqueName: \"kubernetes.io/projected/88fa99d3-ec5d-4833-9507-18ddd8aecf20-kube-api-access-8tb6s\") pod \"community-operators-ts95z\" (UID: \"88fa99d3-ec5d-4833-9507-18ddd8aecf20\") " pod="openshift-marketplace/community-operators-ts95z" Dec 13 22:30:45 crc kubenswrapper[4866]: I1213 22:30:45.239212 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kfrnv"] Dec 13 22:30:45 crc kubenswrapper[4866]: I1213 22:30:45.244736 4866 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kfrnv"] Dec 13 22:30:45 crc kubenswrapper[4866]: I1213 22:30:45.366988 4866 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ts95z" Dec 13 22:30:45 crc kubenswrapper[4866]: I1213 22:30:45.682690 4866 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ts95z"] Dec 13 22:30:45 crc kubenswrapper[4866]: I1213 22:30:45.920451 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ts95z" event={"ID":"88fa99d3-ec5d-4833-9507-18ddd8aecf20","Type":"ContainerStarted","Data":"cfd0fc523d78d06db97e86640f4b81c4b8fd8e9276c162b8a1f24f7e39ef78d4"} Dec 13 22:30:46 crc kubenswrapper[4866]: I1213 22:30:46.219568 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39e722b6-06b7-4070-bcf3-174fcc301848" path="/var/lib/kubelet/pods/39e722b6-06b7-4070-bcf3-174fcc301848/volumes" Dec 13 22:30:46 crc kubenswrapper[4866]: I1213 22:30:46.926433 4866 generic.go:334] "Generic (PLEG): container finished" podID="88fa99d3-ec5d-4833-9507-18ddd8aecf20" containerID="73eed59d7568c448875e1569525333228c13f02d79f27408814da6177516ae9a" exitCode=0 Dec 13 22:30:46 crc kubenswrapper[4866]: I1213 22:30:46.926629 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ts95z" event={"ID":"88fa99d3-ec5d-4833-9507-18ddd8aecf20","Type":"ContainerDied","Data":"73eed59d7568c448875e1569525333228c13f02d79f27408814da6177516ae9a"} Dec 13 22:30:47 crc kubenswrapper[4866]: I1213 22:30:47.933303 4866 generic.go:334] "Generic (PLEG): container finished" podID="88fa99d3-ec5d-4833-9507-18ddd8aecf20" containerID="5dcbd80727116b7ac2e8bf1d13e55f1a3ad49ede54b60cc95361a8b54c2e0ddd" exitCode=0 Dec 13 22:30:47 crc kubenswrapper[4866]: I1213 22:30:47.933345 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ts95z" event={"ID":"88fa99d3-ec5d-4833-9507-18ddd8aecf20","Type":"ContainerDied","Data":"5dcbd80727116b7ac2e8bf1d13e55f1a3ad49ede54b60cc95361a8b54c2e0ddd"} Dec 13 22:30:48 crc kubenswrapper[4866]: I1213 22:30:48.172209 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k2rdw" Dec 13 22:30:48 crc kubenswrapper[4866]: I1213 22:30:48.172870 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k2rdw" Dec 13 22:30:48 crc kubenswrapper[4866]: I1213 22:30:48.209713 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k2rdw" Dec 13 22:30:48 crc kubenswrapper[4866]: I1213 22:30:48.940679 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ts95z" 
event={"ID":"88fa99d3-ec5d-4833-9507-18ddd8aecf20","Type":"ContainerStarted","Data":"b4a383d53ec05987a037c9c3a75c56dbfc913110915df68fc2193dd87c36b536"} Dec 13 22:30:48 crc kubenswrapper[4866]: I1213 22:30:48.986316 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k2rdw" Dec 13 22:30:49 crc kubenswrapper[4866]: I1213 22:30:49.001533 4866 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ts95z" podStartSLOduration=2.502363478 podStartE2EDuration="4.001520286s" podCreationTimestamp="2025-12-13 22:30:45 +0000 UTC" firstStartedPulling="2025-12-13 22:30:46.928285281 +0000 UTC m=+844.969623833" lastFinishedPulling="2025-12-13 22:30:48.427442069 +0000 UTC m=+846.468780641" observedRunningTime="2025-12-13 22:30:48.961447568 +0000 UTC m=+847.002786120" watchObservedRunningTime="2025-12-13 22:30:49.001520286 +0000 UTC m=+847.042858838" Dec 13 22:30:51 crc kubenswrapper[4866]: I1213 22:30:51.442262 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k2rdw"] Dec 13 22:30:51 crc kubenswrapper[4866]: I1213 22:30:51.952523 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k2rdw" podUID="3bf23dd3-c096-4f88-8bce-05be9902e173" containerName="registry-server" containerID="cri-o://34f6df54908d8c694d1187eefc8468feaeb5ef6c97d727c63a63ab91f9e35639" gracePeriod=2 Dec 13 22:30:53 crc kubenswrapper[4866]: I1213 22:30:53.967414 4866 generic.go:334] "Generic (PLEG): container finished" podID="3bf23dd3-c096-4f88-8bce-05be9902e173" containerID="34f6df54908d8c694d1187eefc8468feaeb5ef6c97d727c63a63ab91f9e35639" exitCode=0 Dec 13 22:30:53 crc kubenswrapper[4866]: I1213 22:30:53.967478 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2rdw" event={"ID":"3bf23dd3-c096-4f88-8bce-05be9902e173","Type":"ContainerDied","Data":"34f6df54908d8c694d1187eefc8468feaeb5ef6c97d727c63a63ab91f9e35639"} Dec 13 22:30:54 crc kubenswrapper[4866]: I1213 22:30:54.127798 4866 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k2rdw" Dec 13 22:30:54 crc kubenswrapper[4866]: I1213 22:30:54.134227 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf23dd3-c096-4f88-8bce-05be9902e173-catalog-content\") pod \"3bf23dd3-c096-4f88-8bce-05be9902e173\" (UID: \"3bf23dd3-c096-4f88-8bce-05be9902e173\") " Dec 13 22:30:54 crc kubenswrapper[4866]: I1213 22:30:54.134320 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps7bf\" (UniqueName: \"kubernetes.io/projected/3bf23dd3-c096-4f88-8bce-05be9902e173-kube-api-access-ps7bf\") pod \"3bf23dd3-c096-4f88-8bce-05be9902e173\" (UID: \"3bf23dd3-c096-4f88-8bce-05be9902e173\") " Dec 13 22:30:54 crc kubenswrapper[4866]: I1213 22:30:54.134380 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf23dd3-c096-4f88-8bce-05be9902e173-utilities\") pod \"3bf23dd3-c096-4f88-8bce-05be9902e173\" (UID: \"3bf23dd3-c096-4f88-8bce-05be9902e173\") " Dec 13 22:30:54 crc kubenswrapper[4866]: I1213 22:30:54.135367 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bf23dd3-c096-4f88-8bce-05be9902e173-utilities" (OuterVolumeSpecName: "utilities") pod "3bf23dd3-c096-4f88-8bce-05be9902e173" (UID: "3bf23dd3-c096-4f88-8bce-05be9902e173"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 22:30:54 crc kubenswrapper[4866]: I1213 22:30:54.141375 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bf23dd3-c096-4f88-8bce-05be9902e173-kube-api-access-ps7bf" (OuterVolumeSpecName: "kube-api-access-ps7bf") pod "3bf23dd3-c096-4f88-8bce-05be9902e173" (UID: "3bf23dd3-c096-4f88-8bce-05be9902e173"). InnerVolumeSpecName "kube-api-access-ps7bf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:30:54 crc kubenswrapper[4866]: I1213 22:30:54.235629 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps7bf\" (UniqueName: \"kubernetes.io/projected/3bf23dd3-c096-4f88-8bce-05be9902e173-kube-api-access-ps7bf\") on node \"crc\" DevicePath \"\"" Dec 13 22:30:54 crc kubenswrapper[4866]: I1213 22:30:54.235659 4866 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf23dd3-c096-4f88-8bce-05be9902e173-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 22:30:54 crc kubenswrapper[4866]: I1213 22:30:54.282484 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bf23dd3-c096-4f88-8bce-05be9902e173-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3bf23dd3-c096-4f88-8bce-05be9902e173" (UID: "3bf23dd3-c096-4f88-8bce-05be9902e173"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 22:30:54 crc kubenswrapper[4866]: I1213 22:30:54.337193 4866 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf23dd3-c096-4f88-8bce-05be9902e173-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 22:30:54 crc kubenswrapper[4866]: I1213 22:30:54.975472 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2rdw" event={"ID":"3bf23dd3-c096-4f88-8bce-05be9902e173","Type":"ContainerDied","Data":"feacdb3167578d707fabce1d23ff5cb6bad73c62b7a82db6e14aaea1472c03cd"} Dec 13 22:30:54 crc kubenswrapper[4866]: I1213 22:30:54.975536 4866 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k2rdw" Dec 13 22:30:54 crc kubenswrapper[4866]: I1213 22:30:54.976073 4866 scope.go:117] "RemoveContainer" containerID="34f6df54908d8c694d1187eefc8468feaeb5ef6c97d727c63a63ab91f9e35639" Dec 13 22:30:54 crc kubenswrapper[4866]: I1213 22:30:54.992097 4866 scope.go:117] "RemoveContainer" containerID="0b5f73f3125f0be29d45d5e183054d8352459d0ff609927ca7f63c2ee52ac114" Dec 13 22:30:55 crc kubenswrapper[4866]: I1213 22:30:55.016492 4866 scope.go:117] "RemoveContainer" containerID="f16ad243c0b1bcffd7a461bbbfcd23c8a58f751e963a59f6cf7a5fdf52d25565" Dec 13 22:30:55 crc kubenswrapper[4866]: I1213 22:30:55.023035 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k2rdw"] Dec 13 22:30:55 crc kubenswrapper[4866]: I1213 22:30:55.030977 4866 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k2rdw"] Dec 13 22:30:55 crc kubenswrapper[4866]: I1213 22:30:55.367263 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ts95z" Dec 13 22:30:55 crc kubenswrapper[4866]: I1213 22:30:55.367300 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ts95z" Dec 13 22:30:55 crc kubenswrapper[4866]: I1213 22:30:55.421856 4866 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ts95z" Dec 13 22:30:56 crc kubenswrapper[4866]: I1213 22:30:56.021670 4866 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ts95z" Dec 13 22:30:56 crc kubenswrapper[4866]: I1213 22:30:56.219563 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bf23dd3-c096-4f88-8bce-05be9902e173" path="/var/lib/kubelet/pods/3bf23dd3-c096-4f88-8bce-05be9902e173/volumes" Dec 13 22:30:56 crc kubenswrapper[4866]: I1213 22:30:56.841112 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ts95z"] Dec 13 22:30:57 crc kubenswrapper[4866]: I1213 22:30:57.993665 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ts95z" podUID="88fa99d3-ec5d-4833-9507-18ddd8aecf20" containerName="registry-server" containerID="cri-o://b4a383d53ec05987a037c9c3a75c56dbfc913110915df68fc2193dd87c36b536" gracePeriod=2 Dec 13 22:30:58 crc kubenswrapper[4866]: I1213 22:30:58.356687 4866 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ts95z" Dec 13 22:30:58 crc kubenswrapper[4866]: I1213 22:30:58.389398 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tb6s\" (UniqueName: \"kubernetes.io/projected/88fa99d3-ec5d-4833-9507-18ddd8aecf20-kube-api-access-8tb6s\") pod \"88fa99d3-ec5d-4833-9507-18ddd8aecf20\" (UID: \"88fa99d3-ec5d-4833-9507-18ddd8aecf20\") " Dec 13 22:30:58 crc kubenswrapper[4866]: I1213 22:30:58.389448 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88fa99d3-ec5d-4833-9507-18ddd8aecf20-utilities\") pod \"88fa99d3-ec5d-4833-9507-18ddd8aecf20\" (UID: \"88fa99d3-ec5d-4833-9507-18ddd8aecf20\") " Dec 13 22:30:58 crc kubenswrapper[4866]: I1213 22:30:58.389507 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88fa99d3-ec5d-4833-9507-18ddd8aecf20-catalog-content\") pod \"88fa99d3-ec5d-4833-9507-18ddd8aecf20\" (UID: \"88fa99d3-ec5d-4833-9507-18ddd8aecf20\") " Dec 13 22:30:58 crc kubenswrapper[4866]: I1213 22:30:58.390748 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88fa99d3-ec5d-4833-9507-18ddd8aecf20-utilities" (OuterVolumeSpecName: "utilities") pod "88fa99d3-ec5d-4833-9507-18ddd8aecf20" (UID: "88fa99d3-ec5d-4833-9507-18ddd8aecf20"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 22:30:58 crc kubenswrapper[4866]: I1213 22:30:58.393246 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88fa99d3-ec5d-4833-9507-18ddd8aecf20-kube-api-access-8tb6s" (OuterVolumeSpecName: "kube-api-access-8tb6s") pod "88fa99d3-ec5d-4833-9507-18ddd8aecf20" (UID: "88fa99d3-ec5d-4833-9507-18ddd8aecf20"). InnerVolumeSpecName "kube-api-access-8tb6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 22:30:58 crc kubenswrapper[4866]: I1213 22:30:58.445703 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88fa99d3-ec5d-4833-9507-18ddd8aecf20-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88fa99d3-ec5d-4833-9507-18ddd8aecf20" (UID: "88fa99d3-ec5d-4833-9507-18ddd8aecf20"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 22:30:58 crc kubenswrapper[4866]: I1213 22:30:58.490599 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tb6s\" (UniqueName: \"kubernetes.io/projected/88fa99d3-ec5d-4833-9507-18ddd8aecf20-kube-api-access-8tb6s\") on node \"crc\" DevicePath \"\"" Dec 13 22:30:58 crc kubenswrapper[4866]: I1213 22:30:58.490630 4866 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88fa99d3-ec5d-4833-9507-18ddd8aecf20-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 22:30:58 crc kubenswrapper[4866]: I1213 22:30:58.490642 4866 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88fa99d3-ec5d-4833-9507-18ddd8aecf20-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 22:30:59 crc kubenswrapper[4866]: I1213 22:30:59.000721 4866 generic.go:334] "Generic (PLEG): container finished" podID="88fa99d3-ec5d-4833-9507-18ddd8aecf20" containerID="b4a383d53ec05987a037c9c3a75c56dbfc913110915df68fc2193dd87c36b536" exitCode=0 Dec 13 22:30:59 crc kubenswrapper[4866]: I1213 22:30:59.000780 4866 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ts95z" Dec 13 22:30:59 crc kubenswrapper[4866]: I1213 22:30:59.000778 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ts95z" event={"ID":"88fa99d3-ec5d-4833-9507-18ddd8aecf20","Type":"ContainerDied","Data":"b4a383d53ec05987a037c9c3a75c56dbfc913110915df68fc2193dd87c36b536"} Dec 13 22:30:59 crc kubenswrapper[4866]: I1213 22:30:59.000846 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ts95z" event={"ID":"88fa99d3-ec5d-4833-9507-18ddd8aecf20","Type":"ContainerDied","Data":"cfd0fc523d78d06db97e86640f4b81c4b8fd8e9276c162b8a1f24f7e39ef78d4"} Dec 13 22:30:59 crc kubenswrapper[4866]: I1213 22:30:59.000884 4866 scope.go:117] "RemoveContainer" containerID="b4a383d53ec05987a037c9c3a75c56dbfc913110915df68fc2193dd87c36b536" Dec 13 22:30:59 crc kubenswrapper[4866]: I1213 22:30:59.019262 4866 scope.go:117] "RemoveContainer" containerID="5dcbd80727116b7ac2e8bf1d13e55f1a3ad49ede54b60cc95361a8b54c2e0ddd" Dec 13 22:30:59 crc kubenswrapper[4866]: I1213 22:30:59.035009 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ts95z"] Dec 13 22:30:59 crc kubenswrapper[4866]: I1213 22:30:59.041461 4866 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ts95z"] Dec 13 22:30:59 crc kubenswrapper[4866]: I1213 22:30:59.046928 4866 scope.go:117] "RemoveContainer" containerID="73eed59d7568c448875e1569525333228c13f02d79f27408814da6177516ae9a" Dec 13 22:30:59 crc kubenswrapper[4866]: I1213 22:30:59.067624 4866 scope.go:117] "RemoveContainer" containerID="b4a383d53ec05987a037c9c3a75c56dbfc913110915df68fc2193dd87c36b536" Dec 13 22:30:59 crc kubenswrapper[4866]: E1213 22:30:59.068099 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4a383d53ec05987a037c9c3a75c56dbfc913110915df68fc2193dd87c36b536\": container with ID starting with b4a383d53ec05987a037c9c3a75c56dbfc913110915df68fc2193dd87c36b536 not found: ID does not exist" containerID="b4a383d53ec05987a037c9c3a75c56dbfc913110915df68fc2193dd87c36b536" Dec 13 22:30:59 crc kubenswrapper[4866]: I1213 22:30:59.068128 
Dec 13 22:30:59 crc kubenswrapper[4866]: I1213 22:30:59.068148 4866 scope.go:117] "RemoveContainer" containerID="5dcbd80727116b7ac2e8bf1d13e55f1a3ad49ede54b60cc95361a8b54c2e0ddd"
Dec 13 22:30:59 crc kubenswrapper[4866]: E1213 22:30:59.068559 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dcbd80727116b7ac2e8bf1d13e55f1a3ad49ede54b60cc95361a8b54c2e0ddd\": container with ID starting with 5dcbd80727116b7ac2e8bf1d13e55f1a3ad49ede54b60cc95361a8b54c2e0ddd not found: ID does not exist" containerID="5dcbd80727116b7ac2e8bf1d13e55f1a3ad49ede54b60cc95361a8b54c2e0ddd"
Dec 13 22:30:59 crc kubenswrapper[4866]: I1213 22:30:59.068576 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dcbd80727116b7ac2e8bf1d13e55f1a3ad49ede54b60cc95361a8b54c2e0ddd"} err="failed to get container status \"5dcbd80727116b7ac2e8bf1d13e55f1a3ad49ede54b60cc95361a8b54c2e0ddd\": rpc error: code = NotFound desc = could not find container \"5dcbd80727116b7ac2e8bf1d13e55f1a3ad49ede54b60cc95361a8b54c2e0ddd\": container with ID starting with 5dcbd80727116b7ac2e8bf1d13e55f1a3ad49ede54b60cc95361a8b54c2e0ddd not found: ID does not exist"
Dec 13 22:30:59 crc kubenswrapper[4866]: I1213 22:30:59.068588 4866 scope.go:117] "RemoveContainer" containerID="73eed59d7568c448875e1569525333228c13f02d79f27408814da6177516ae9a"
Dec 13 22:30:59 crc kubenswrapper[4866]: E1213 22:30:59.068832 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73eed59d7568c448875e1569525333228c13f02d79f27408814da6177516ae9a\": container with ID starting with 73eed59d7568c448875e1569525333228c13f02d79f27408814da6177516ae9a not found: ID does not exist" containerID="73eed59d7568c448875e1569525333228c13f02d79f27408814da6177516ae9a"
Dec 13 22:30:59 crc kubenswrapper[4866]: I1213 22:30:59.068855 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73eed59d7568c448875e1569525333228c13f02d79f27408814da6177516ae9a"} err="failed to get container status \"73eed59d7568c448875e1569525333228c13f02d79f27408814da6177516ae9a\": rpc error: code = NotFound desc = could not find container \"73eed59d7568c448875e1569525333228c13f02d79f27408814da6177516ae9a\": container with ID starting with 73eed59d7568c448875e1569525333228c13f02d79f27408814da6177516ae9a not found: ID does not exist"
Dec 13 22:31:00 crc kubenswrapper[4866]: I1213 22:31:00.221599 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88fa99d3-ec5d-4833-9507-18ddd8aecf20" path="/var/lib/kubelet/pods/88fa99d3-ec5d-4833-9507-18ddd8aecf20/volumes"
Dec 13 22:31:08 crc kubenswrapper[4866]: I1213 22:31:08.045456 4866 generic.go:334] "Generic (PLEG): container finished" podID="7a1d3004-582c-4013-a776-abb703ec3228" containerID="f725fc3a1b67b98aa80afa60c56310058fdf62e48ce2f46441e177061ce46788" exitCode=0
Dec 13 22:31:08 crc kubenswrapper[4866]: I1213 22:31:08.045939 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v58rx/must-gather-27sp2" event={"ID":"7a1d3004-582c-4013-a776-abb703ec3228","Type":"ContainerDied","Data":"f725fc3a1b67b98aa80afa60c56310058fdf62e48ce2f46441e177061ce46788"}
Dec 13 22:31:08 crc kubenswrapper[4866]: I1213 22:31:08.046415 4866 scope.go:117] "RemoveContainer" containerID="f725fc3a1b67b98aa80afa60c56310058fdf62e48ce2f46441e177061ce46788"
Dec 13 22:31:08 crc kubenswrapper[4866]: I1213 22:31:08.240741 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v58rx_must-gather-27sp2_7a1d3004-582c-4013-a776-abb703ec3228/gather/0.log"
Dec 13 22:31:15 crc kubenswrapper[4866]: I1213 22:31:15.029127 4866 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-v58rx/must-gather-27sp2"]
Dec 13 22:31:15 crc kubenswrapper[4866]: I1213 22:31:15.029878 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-v58rx/must-gather-27sp2" podUID="7a1d3004-582c-4013-a776-abb703ec3228" containerName="copy" containerID="cri-o://2174653fd15703f32319804e72454faf3945ceb15b4333ac651f73b7b7aec21c" gracePeriod=2
Dec 13 22:31:15 crc kubenswrapper[4866]: I1213 22:31:15.035295 4866 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-v58rx/must-gather-27sp2"]
Dec 13 22:31:15 crc kubenswrapper[4866]: I1213 22:31:15.351380 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v58rx_must-gather-27sp2_7a1d3004-582c-4013-a776-abb703ec3228/copy/0.log"
Dec 13 22:31:15 crc kubenswrapper[4866]: I1213 22:31:15.352020 4866 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v58rx/must-gather-27sp2"
Dec 13 22:31:15 crc kubenswrapper[4866]: I1213 22:31:15.496163 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249lq\" (UniqueName: \"kubernetes.io/projected/7a1d3004-582c-4013-a776-abb703ec3228-kube-api-access-249lq\") pod \"7a1d3004-582c-4013-a776-abb703ec3228\" (UID: \"7a1d3004-582c-4013-a776-abb703ec3228\") "
Dec 13 22:31:15 crc kubenswrapper[4866]: I1213 22:31:15.496253 4866 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7a1d3004-582c-4013-a776-abb703ec3228-must-gather-output\") pod \"7a1d3004-582c-4013-a776-abb703ec3228\" (UID: \"7a1d3004-582c-4013-a776-abb703ec3228\") "
Dec 13 22:31:15 crc kubenswrapper[4866]: I1213 22:31:15.515194 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a1d3004-582c-4013-a776-abb703ec3228-kube-api-access-249lq" (OuterVolumeSpecName: "kube-api-access-249lq") pod "7a1d3004-582c-4013-a776-abb703ec3228" (UID: "7a1d3004-582c-4013-a776-abb703ec3228"). InnerVolumeSpecName "kube-api-access-249lq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 22:31:15 crc kubenswrapper[4866]: I1213 22:31:15.569037 4866 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a1d3004-582c-4013-a776-abb703ec3228-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "7a1d3004-582c-4013-a776-abb703ec3228" (UID: "7a1d3004-582c-4013-a776-abb703ec3228"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 13 22:31:15 crc kubenswrapper[4866]: I1213 22:31:15.597978 4866 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249lq\" (UniqueName: \"kubernetes.io/projected/7a1d3004-582c-4013-a776-abb703ec3228-kube-api-access-249lq\") on node \"crc\" DevicePath \"\""
Dec 13 22:31:15 crc kubenswrapper[4866]: I1213 22:31:15.598030 4866 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7a1d3004-582c-4013-a776-abb703ec3228-must-gather-output\") on node \"crc\" DevicePath \"\""
Dec 13 22:31:16 crc kubenswrapper[4866]: I1213 22:31:16.098225 4866 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v58rx_must-gather-27sp2_7a1d3004-582c-4013-a776-abb703ec3228/copy/0.log"
Dec 13 22:31:16 crc kubenswrapper[4866]: I1213 22:31:16.098694 4866 generic.go:334] "Generic (PLEG): container finished" podID="7a1d3004-582c-4013-a776-abb703ec3228" containerID="2174653fd15703f32319804e72454faf3945ceb15b4333ac651f73b7b7aec21c" exitCode=143
Dec 13 22:31:16 crc kubenswrapper[4866]: I1213 22:31:16.098741 4866 scope.go:117] "RemoveContainer" containerID="2174653fd15703f32319804e72454faf3945ceb15b4333ac651f73b7b7aec21c"
Dec 13 22:31:16 crc kubenswrapper[4866]: I1213 22:31:16.098826 4866 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v58rx/must-gather-27sp2"
Dec 13 22:31:16 crc kubenswrapper[4866]: I1213 22:31:16.140167 4866 scope.go:117] "RemoveContainer" containerID="f725fc3a1b67b98aa80afa60c56310058fdf62e48ce2f46441e177061ce46788"
Dec 13 22:31:16 crc kubenswrapper[4866]: I1213 22:31:16.180112 4866 scope.go:117] "RemoveContainer" containerID="2174653fd15703f32319804e72454faf3945ceb15b4333ac651f73b7b7aec21c"
Dec 13 22:31:16 crc kubenswrapper[4866]: E1213 22:31:16.180624 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2174653fd15703f32319804e72454faf3945ceb15b4333ac651f73b7b7aec21c\": container with ID starting with 2174653fd15703f32319804e72454faf3945ceb15b4333ac651f73b7b7aec21c not found: ID does not exist" containerID="2174653fd15703f32319804e72454faf3945ceb15b4333ac651f73b7b7aec21c"
Dec 13 22:31:16 crc kubenswrapper[4866]: I1213 22:31:16.180682 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2174653fd15703f32319804e72454faf3945ceb15b4333ac651f73b7b7aec21c"} err="failed to get container status \"2174653fd15703f32319804e72454faf3945ceb15b4333ac651f73b7b7aec21c\": rpc error: code = NotFound desc = could not find container \"2174653fd15703f32319804e72454faf3945ceb15b4333ac651f73b7b7aec21c\": container with ID starting with 2174653fd15703f32319804e72454faf3945ceb15b4333ac651f73b7b7aec21c not found: ID does not exist"
Dec 13 22:31:16 crc kubenswrapper[4866]: I1213 22:31:16.180754 4866 scope.go:117] "RemoveContainer" containerID="f725fc3a1b67b98aa80afa60c56310058fdf62e48ce2f46441e177061ce46788"
Dec 13 22:31:16 crc kubenswrapper[4866]: E1213 22:31:16.181021 4866 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f725fc3a1b67b98aa80afa60c56310058fdf62e48ce2f46441e177061ce46788\": container with ID starting with f725fc3a1b67b98aa80afa60c56310058fdf62e48ce2f46441e177061ce46788 not found: ID does not exist" containerID="f725fc3a1b67b98aa80afa60c56310058fdf62e48ce2f46441e177061ce46788"
Dec 13 22:31:16 crc kubenswrapper[4866]: I1213 22:31:16.181110 4866 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f725fc3a1b67b98aa80afa60c56310058fdf62e48ce2f46441e177061ce46788"} err="failed to get container status \"f725fc3a1b67b98aa80afa60c56310058fdf62e48ce2f46441e177061ce46788\": rpc error: code = NotFound desc = could not find container \"f725fc3a1b67b98aa80afa60c56310058fdf62e48ce2f46441e177061ce46788\": container with ID starting with f725fc3a1b67b98aa80afa60c56310058fdf62e48ce2f46441e177061ce46788 not found: ID does not exist"
Dec 13 22:31:16 crc kubenswrapper[4866]: I1213 22:31:16.222461 4866 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a1d3004-582c-4013-a776-abb703ec3228" path="/var/lib/kubelet/pods/7a1d3004-582c-4013-a776-abb703ec3228/volumes"
Dec 13 22:31:33 crc kubenswrapper[4866]: I1213 22:31:33.037032 4866 patch_prober.go:28] interesting pod/machine-config-daemon-2855n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 13 22:31:33 crc kubenswrapper[4866]: I1213 22:31:33.037884 4866 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2855n" podUID="9749ec6a-aa76-4ae0-a9d0-453edbf21bca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 13 22:32:03 crc kubenswrapper[4866]: I1213 22:32:03.035550 4866 patch_prober.go:28] interesting pod/machine-config-daemon-2855n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 13 22:32:03 crc kubenswrapper[4866]: I1213 22:32:03.036271 4866 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2855n" podUID="9749ec6a-aa76-4ae0-a9d0-453edbf21bca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 13 22:32:33 crc kubenswrapper[4866]: I1213 22:32:33.036438 4866 patch_prober.go:28] interesting pod/machine-config-daemon-2855n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 13 22:32:33 crc kubenswrapper[4866]: I1213 22:32:33.036909 4866 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2855n" podUID="9749ec6a-aa76-4ae0-a9d0-453edbf21bca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 13 22:32:33 crc kubenswrapper[4866]: I1213 22:32:33.036945 4866 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2855n"
Dec 13 22:32:33 crc kubenswrapper[4866]: I1213 22:32:33.037427 4866 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"82ef2bb4b2988326dcf74ec119d32e88da7c55e5f085330cd1ea94e966204350"}
pod="openshift-machine-config-operator/machine-config-daemon-2855n" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 13 22:32:33 crc kubenswrapper[4866]: I1213 22:32:33.037469 4866 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2855n" podUID="9749ec6a-aa76-4ae0-a9d0-453edbf21bca" containerName="machine-config-daemon" containerID="cri-o://82ef2bb4b2988326dcf74ec119d32e88da7c55e5f085330cd1ea94e966204350" gracePeriod=600 Dec 13 22:32:33 crc kubenswrapper[4866]: I1213 22:32:33.530331 4866 generic.go:334] "Generic (PLEG): container finished" podID="9749ec6a-aa76-4ae0-a9d0-453edbf21bca" containerID="82ef2bb4b2988326dcf74ec119d32e88da7c55e5f085330cd1ea94e966204350" exitCode=0 Dec 13 22:32:33 crc kubenswrapper[4866]: I1213 22:32:33.530603 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2855n" event={"ID":"9749ec6a-aa76-4ae0-a9d0-453edbf21bca","Type":"ContainerDied","Data":"82ef2bb4b2988326dcf74ec119d32e88da7c55e5f085330cd1ea94e966204350"} Dec 13 22:32:33 crc kubenswrapper[4866]: I1213 22:32:33.530628 4866 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2855n" event={"ID":"9749ec6a-aa76-4ae0-a9d0-453edbf21bca","Type":"ContainerStarted","Data":"c1a408f46dd1aab9b46de6338270f96532c1a5be72486abe5bb0f4f3b5d4acc9"} Dec 13 22:32:33 crc kubenswrapper[4866]: I1213 22:32:33.530643 4866 scope.go:117] "RemoveContainer" containerID="ae5acdfbac1d0aef7817a1dd14d929d1361d26782bef23ac5515732bb6d98ab6" Dec 13 22:34:33 crc kubenswrapper[4866]: I1213 22:34:33.036900 4866 patch_prober.go:28] interesting pod/machine-config-daemon-2855n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 22:34:33 crc kubenswrapper[4866]: I1213 22:34:33.037550 4866 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2855n" podUID="9749ec6a-aa76-4ae0-a9d0-453edbf21bca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 22:35:03 crc kubenswrapper[4866]: I1213 22:35:03.036220 4866 patch_prober.go:28] interesting pod/machine-config-daemon-2855n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 22:35:03 crc kubenswrapper[4866]: I1213 22:35:03.036739 4866 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2855n" podUID="9749ec6a-aa76-4ae0-a9d0-453edbf21bca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"